
Joe


Elastic D&D - Update 9 - FastAPI

Last week we talked about the changes to the Streamlit application. If you missed it, you can check that out here!

FastAPI

FastAPI is a Python library used for creating, you guessed it, APIs. As the name implies, it's fast, and it gives you complete control over your endpoints, which is powerful.

Currently, I have a few endpoints built, each of which helps with the functionality of Veverbot. Here's the full API:

# Elastic D&D
# Author: thtmexicnkid
# Last Updated: 10/04/2023
# 
# FastAPI app that facilitates Virtual DM processes and whatever else I think of.

import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
    return {"message":"Hello World"}

@app.get("/get_vector_object/{text}")
async def get_vector_object(text):
    import openai

    openai.api_key = "API_KEY"  # never commit a real key to source control
    embedding_model = "text-embedding-ada-002"
    openai_embedding = openai.Embedding.create(input=text, model=embedding_model)

    return openai_embedding["data"][0]["embedding"]

@app.get("/get_question_answer/{question}/{query_results}")
async def get_question_answer(question,query_results):
    import openai

    summary = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Answer the following question: "
            + question
            + " by using the following text: "
            + query_results},
        ]
    )

    answers = []
    for choice in summary.choices:
        answers.append(choice.message.content)

    return answers

if __name__ == '__main__':
    uvicorn.run("main:app", port=8000, host='0.0.0.0', reload=True)

API Endpoints

You define custom API endpoints with the @app.get() decorator. The great thing about FastAPI is that it handles variable input through path parameters: include {variable_name} in the endpoint path, and the value is passed to your function. Multiple path parameters are supported as well!

Root

The root endpoint is simply here to allow us to test if we can access the API from remote locations. If you see "Hello World", then you're good to go!

@app.get("/")
async def root():
    return {"message":"Hello World"}
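To sanity-check reachability from another machine, a small client like the following works; the host and port are assumptions based on the uvicorn.run call above, so substitute your server's actual address:

```python
import json
import urllib.request

API_BASE = "http://localhost:8000"  # assumed; replace with the server's address

def api_is_up(base_url: str = API_BASE) -> bool:
    """Hit the root endpoint and confirm the expected greeting."""
    with urllib.request.urlopen(f"{base_url}/") as resp:
        return json.load(resp) == {"message": "Hello World"}

if __name__ == "__main__":
    print("API reachable:", api_is_up())
```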

Get Vector Object

This endpoint does exactly what the name says: gets a vector object of the variable text input. We then use this vector object in KNN queries to assist Veverbot in returning helpful results.

@app.get("/get_vector_object/{text}")
async def get_vector_object(text):
    import openai

    openai.api_key = "API_KEY"
    embedding_model = "text-embedding-ada-002"
    openai_embedding = openai.Embedding.create(input=text, model=embedding_model)

    return openai_embedding["data"][0]["embedding"]
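The field and source names below are placeholders (the post doesn't show the actual index mapping), but the general shape of an Elasticsearch 8.x kNN search body built from the returned embedding looks roughly like this:

```python
def build_knn_query(embedding, field="note_vector", k=5, num_candidates=50):
    """Wrap an embedding in an Elasticsearch kNN search body.

    The field name and _source fields are assumptions for illustration,
    not the project's real mapping.
    """
    return {
        "knn": {
            "field": field,
            "query_vector": embedding,
            "k": k,
            "num_candidates": num_candidates,
        },
        "_source": ["message"],
    }

# The embedding returned by /get_vector_object/{text} plugs in directly:
query = build_knn_query([0.12, -0.03, 0.88])
```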

Get Question Answer

Again, this endpoint does exactly what the name says: it returns an answer to a question asked of Veverbot. There are two variables here -- the question itself, and the KNN query results for that question. Both are sent to OpenAI, which generates a short written answer used for Veverbot's response.

@app.get("/get_question_answer/{question}/{query_results}")
async def get_question_answer(question,query_results):
    import openai

    summary = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Answer the following question: "
            + question
            + " by using the following text: "
            + query_results},
        ]
    )

    answers = []
    for choice in summary.choices:
        answers.append(choice.message.content)

    return answers
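Since both variables travel in the URL path, the caller (the Streamlit app, in this project) has to URL-encode them. A small helper like this hypothetical one does the job:

```python
from urllib.parse import quote

def question_url(base: str, question: str, query_results: str) -> str:
    """Build the endpoint URL; safe="" also encodes any slashes in the text,
    so free-form question text can't break the path structure."""
    q = quote(question, safe="")
    r = quote(query_results, safe="")
    return f"{base}/get_question_answer/{q}/{r}"

url = question_url("http://localhost:8000", "Who rules the keep?", "The keep is ruled by Bob.")
```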

Closing Remarks

This is a work-in-progress. I have plans to add more endpoints, mainly moving some of my larger Python functions over here so they live in one place. Once I swap audio transcription from AssemblyAI to something free, I will probably move that to the API as well.

Check out the GitHub repo below. You can also find my Twitch account in the socials link, where I will be actively working on this during the week while interacting with whoever is hanging out!

GitHub Repo
Socials

Happy Coding,
Joe
