Anna Sofia Lippolis
Seconding that, it's slow and consumes a lot of GPU resources
@dosu-bot And how do I query from the example you wrote? If I try with this snippet it won't work:

```
llm = OpenAI(temperature=0, model="text-davinci-002")
service_context = ServiceContext.from_defaults(llm=llm, chunk_size_limit=512)
index = ...
```
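In case it helps later readers: a minimal sketch of how querying could look with the legacy ServiceContext-based API used in this thread. Newer releases accept `chunk_size` rather than `chunk_size_limit` in `ServiceContext.from_defaults`, which may be why the snippet above fails; the query text is illustrative.

```
# Hedged sketch, assuming the legacy llama_index API used elsewhere in this thread.
from llama_index import KnowledgeGraphIndex, ServiceContext
from llama_index.llms import OpenAI

llm = OpenAI(temperature=0, model="text-davinci-002")
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=512)
index = KnowledgeGraphIndex([], service_context=service_context)

# Once the index holds triplets, wrap it in a query engine and ask a question.
query_engine = index.as_query_engine(include_text=False, response_mode="tree_summarize")
print(query_engine.query("Tell me about foo"))
```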
@dosu-bot how to add StorageContext and GraphStore in order to query the knowledge graph?
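For what it's worth, a minimal sketch of wiring a graph store into the index through a StorageContext, assuming the same legacy API; SimpleGraphStore is the in-memory store, and the persist directory below is just a placeholder.

```
# Hedged sketch, legacy llama_index API assumed; the persist path is a placeholder.
from llama_index import KnowledgeGraphIndex, ServiceContext, StorageContext
from llama_index.graph_stores import SimpleGraphStore
from llama_index.llms import OpenAI

llm = OpenAI(temperature=0, model="text-davinci-002")
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=512)

# The graph store is attached to the index through a StorageContext.
graph_store = SimpleGraphStore()
storage_context = StorageContext.from_defaults(graph_store=graph_store)
index = KnowledgeGraphIndex(
    [],
    storage_context=storage_context,
    service_context=service_context,
)

# Query as before, and optionally persist the store to disk.
query_engine = index.as_query_engine(include_text=False)
storage_context.persist(persist_dir="./kg_storage")
```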
@dosu-bot

```
from llama_index import KnowledgeGraphIndex, TextNode, ServiceContext
from llama_index.llms import OpenAI

llm = OpenAI(temperature=0, model="text-davinci-002")
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=512)
index = KnowledgeGraphIndex([], service_context=service_context)

tuples = [
    ("foo", "is", "bar"),
    ...
```
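The snippet above is cut off; here is a hedged sketch of how the manual-triplet pattern could continue, assuming the legacy upsert_triplet_and_node method. The triplets and node text are illustrative.

```
# Hedged continuation sketch, legacy llama_index API assumed; data is illustrative.
from llama_index import KnowledgeGraphIndex, TextNode, ServiceContext
from llama_index.llms import OpenAI

llm = OpenAI(temperature=0, model="text-davinci-002")
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=512)
index = KnowledgeGraphIndex([], service_context=service_context)

tuples = [
    ("foo", "is", "bar"),
    ("bar", "likes", "baz"),
]

# Each triplet is attached to a source node so the index can return supporting text.
node = TextNode(text="foo is bar, and bar likes baz.")
for triplet in tuples:
    index.upsert_triplet_and_node(triplet, node)

# Query the manually built graph.
print(index.as_query_engine(include_text=False).query("What is foo?"))
```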
@dosu-bot How do I import an existing knowledge graph on Neo4j and query it? More specifically, how do I load it? Code:

```
llm = OpenAI(temperature=0, model="gpt-3.5-turbo")
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=512)
```

Define Neo4j...
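A hedged sketch of pointing the index at an existing Neo4j database, assuming the legacy Neo4jGraphStore and KnowledgeGraphRAGRetriever; the credentials, bolt URL, and query are placeholders.

```
# Hedged sketch, legacy llama_index API assumed; credentials and URL are placeholders.
from llama_index import ServiceContext, StorageContext
from llama_index.graph_stores import Neo4jGraphStore
from llama_index.llms import OpenAI
from llama_index.query_engine import RetrieverQueryEngine
from llama_index.retrievers import KnowledgeGraphRAGRetriever

llm = OpenAI(temperature=0, model="gpt-3.5-turbo")
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=512)

# Connect the graph store to the existing Neo4j database.
graph_store = Neo4jGraphStore(
    username="neo4j",
    password="password",
    url="bolt://localhost:7687",
    database="neo4j",
)
storage_context = StorageContext.from_defaults(graph_store=graph_store)

# Retrieve directly from the pre-existing graph instead of rebuilding an index.
retriever = KnowledgeGraphRAGRetriever(
    storage_context=storage_context,
    service_context=service_context,
    llm=llm,
)
query_engine = RetrieverQueryEngine.from_args(retriever, service_context=service_context)
print(query_engine.query("Which entities are related to foo?"))
```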
Following
> Write down the execution steps of linear_search(list(["Coraline", "American Gods", "The Graveyard Book", "Good Omens", "Neverwhere"]), "The Sandman")

A linear search finds a value in a list and then returns...
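For reference, a short sketch of a linear_search that prints its execution steps; it assumes the function returns the index of the match, or None when the value is absent (as it is here).

```
def linear_search(input_list, value_to_search):
    # Compare the value against each element, left to right.
    for position, item in enumerate(input_list):
        print(f"step {position}: comparing {item!r} with {value_to_search!r}")
        if item == value_to_search:
            return position          # found: return the index
    return None                      # exhausted the list without a match

books = ["Coraline", "American Gods", "The Graveyard Book", "Good Omens", "Neverwhere"]
# "The Sandman" is not in the list, so all five comparisons fail and None is returned.
print(linear_search(books, "The Sandman"))
```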
```
# Test case for the algorithm
def test_stack_from_list(input_list, value_to_search, expected):
    result = stack_from_list(input_list, value_to_search)
    if expected == result:
        return True
    else:
        return False
```
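The function under test is not shown in this thread; below is a hedged sketch of what stack_from_list might look like (a search that pops elements off a stack built from the list and returns True on a match), together with how the test above could be called. Both the implementation and the expected values are assumptions for illustration.

```
# Hypothetical implementation, assumed for illustration only.
def stack_from_list(input_list, value_to_search):
    stack = list(input_list)          # build a stack from the list
    while stack:                      # pop until the stack is empty
        if stack.pop() == value_to_search:
            return True               # value found
    return False                      # value not in the stack

books = ["Coraline", "American Gods", "The Graveyard Book", "Good Omens", "Neverwhere"]
print(test_stack_from_list(books, "Coraline", True))        # True: test passes
print(test_stack_from_list(books, "The Sandman", False))    # True: test passes
```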
> Consider the set created in the first exercise, stored in the variable my_set. Describe the status of my_set after the execution of each of the following operations: my_set.remove("Bilbo"), my_set.add("Galadriel"),...
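A small sketch of the two operations quoted so far, assuming my_set starts as the set from the first exercise; the ordering in the printed output is arbitrary because sets are unordered.

```
my_set = {"Bilbo", "Frodo", "Sam", "Pippin", "Merry"}

my_set.remove("Bilbo")      # "Bilbo" leaves the set: {"Frodo", "Sam", "Pippin", "Merry"}
my_set.add("Galadriel")     # "Galadriel" joins it: {"Frodo", "Sam", "Pippin", "Merry", "Galadriel"}
print(my_set)
```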
> Write a pseudocode in Python so as to create a set of the following elements: "Bilbo", "Frodo", "Sam", "Pippin", "Merry".

```
set_tolkien = set()
set_tolkien.add("Bilbo")
set_tolkien.add("Frodo")
set_tolkien.add("Sam")
set_tolkien.add("Pippin")
set_tolkien.add("Merry")
```

Current status...
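Equivalently, the same set can be written as a literal in one line (a standard Python alternative, not part of the original answer).

```
# One-line equivalent using a set literal.
set_tolkien = {"Bilbo", "Frodo", "Sam", "Pippin", "Merry"}
print(set_tolkien)
```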