import os
os.environ['OPENAI_API_KEY'] = '<Your OpenAI API Key>'

# See here on how to find your Zotero info: https://github.com/urschrei/pyzotero#quickstart
ZOTERO_USER_ID = '<Your Zotero User ID>'
ZOTERO_API_KEY = '<Your Zotero API Key>'
ZOTERO_COLLECTION_ID = '<Your Zotero Collection ID>'

question = 'What predictive models are used in materials discovery?'

# The following prompt instruction is injected to limit the number of keywords per query
question_prompt = 'A "keyword search" is a list of no more than 3 words, separated by whitespace only and with no boolean operators (e.g. "dog canine puppy"). Avoid adding any new words not in the question unless they are synonyms to the existing words.'

from paperqa import Docs
from pyzotero import zotero
import requests
import shutil, sys, re
from bs4 import BeautifulSoup

docs = Docs()
queries = docs.generate_search_query(question + '\n' + question_prompt)
print(f'Search queries: {", ".join(queries)}')

zot = zotero.Zotero(ZOTERO_USER_ID, 'user', ZOTERO_API_KEY)
searches = [zot.collection_items(
    ZOTERO_COLLECTION_ID,
    q=q.strip('"'),
    limit=10,
    itemType='attachment',
    qmode='everything'
) for q in queries]

# Deduplicate PDF attachments across searches by item key
attachments = {item['key']: item for search in searches for item in search if item['data']['contentType'] == 'application/pdf'}.values()
parents = set(a['data']['parentItem'] for a in attachments)
citation_dict = {p: zot.item(p, content='bib', style='american-chemical-society')[0] for p in parents}

result_count = len(parents)
if result_count == 0:
    print('No matched results in Zotero')
    sys.exit()
print(f'Results: {result_count}')

paths = []
citations = []
for attachment in attachments:
    link_mode = attachment['data']['linkMode']
    file_path = f'data/zotero/{attachment["key"]}.pdf'
    parent = citation_dict[attachment['data']['parentItem']]
    if link_mode == 'imported_file':
        zot.dump(attachment['key'], f'{attachment["key"]}.pdf', 'data/zotero')
    elif link_mode == 'linked_file':
        shutil.copy(attachment['data']['path'], file_path)
    elif link_mode == 'imported_url':
        res = requests.get(attachment['data']['url'])
        with open(file_path, 'wb') as f:
            f.write(res.content)
    else:
        raise ValueError(f'Unsupported link mode: {link_mode} for {attachment["key"]}.')
    paths.append(file_path)
    # Strip the leading "(1) " numbering from the ACS-style citation
    citations.append(re.sub(r"^\(\d+\)\s+", "", BeautifulSoup(parent, 'html.parser').get_text().strip()))

for d, c in zip(paths, citations):
    docs.add(d, c)

answer = docs.query(question)
with open('data/zotero-answer.txt', 'w') as f:
    f.write(answer.formatted_answer)
Hello @lifan0127, good morning. In his Hugging Face app, the user "ryanrwatkins" talked about using a pickle code block to reuse embeddings. Since I am only working with this script, and not with your app built with Gradio, where could this saving and loading functionality be implemented in your code? (Sorry for being a newbie and asking boring questions, but believe me: I started studying Python precisely because of your code. I'm an R user and a PhD student in Economics, and it has been very useful for the assisted construction of my literature review.)
My solution is to include this block at line 19:

if not os.path.exists("data/paperqa/my_docs.pkl"):
    docs = Docs()
with open("data/paperqa/my_docs.pkl", "rb") as f:
    docs = pickle.load(f)

And to include this at line 70:

with open("data/paperqa/my_docs.pkl", "wb") as f:
    pickle.dump(docs, f)

Is it correct?
@Edilson-R Happy to see your progress!
For regular API calls, the underlying LangChain package caches the results in a local SQLite database. I believe caching for embeddings is not yet supported by LangChain. However, there is an open PR for this feature: langchain-ai/langchain#1930
Once it is added, it would be easy for us to reuse the embeddings transparently.
Thanks for this - I have an issue reading PDFs, as follows:
An error occurred while reading PDF Zotero/storage/QI6TJKBR.pdf: EOF marker not found
An error occurred while reading PDF Zotero/storage/U4XJVRY7.pdf: EOF marker not found
An error occurred while reading PDF Zotero/storage/64WW8VRH.pdf: EOF marker not found
An error occurred while reading PDF Zotero/storage/U5CYRA4T.pdf: EOF marker not found
An error occurred while reading PDF Zotero/storage/L6RLDESA.pdf: EOF marker not found
I've checked the PDFs manually and written a script to ensure the EOF marker is present. Is there a way to force-read PDF files with this error? Thanks again!
Hi @jalalawan I am not sure why the error occurred for you. Does it happen to all your PDF files?
Thanks for responding - it's an issue with only some of my pdf files, I'll try forcing an EOF Marker or download fresh pdf files.
I had another question - how do I change the default GPT-3.5 engine to GPT-4, and change the temperature and max_tokens settings in the script? I am trying to use the following for GPT-4, but keep getting an "Engine not found" error:
llm_gpt4 = AzureOpenAI(
    deployment_name="gpt-4-v0314-base",
    temperature=0.1,
    model_name='gpt-4',
    max_tokens=7000)
and using it in the Docs class in the code as follows:
docs = Docs(llm_gpt4)
Appreciate your insights!
@jalalawan Sorry, I don't have experience with Azure OpenAI. Does your account (API key) have access to GPT-4?
Appreciate your response - I do have access to GPT-4. I also looked at the API documentation, and it seems the default for Docs() is GPT-3.5; I could not find an option to set the temperature or max_tokens parameters.
All said, like others have mentioned here, I really appreciate your contribution to making the LLM experience less hallucinatory and better suited for research.
Thank you for sharing this - does anyone have experience with how this approach compares to using vector databases, as the ChatGPT retrieval plugin advocates?
@andreifoldes I think the retrieval mechanisms are the same. This approach uses the FAISS library for vector-similarity search to find relevant document chunks and then feeds them into LLMs for response synthesis.
Thank you - did you or anyone play around with the different libraries? Would there be a reason for one outperforming the rest when it comes to academic Q&A tasks?
Yes, I think retrieval over the vector embeddings is based on a cosine-similarity metric for FAISS / GPT etc.; the Q&A performance depends largely on whether the model is fine-tuned and/or on the prompt templates (see the langchain library).
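To make the retrieval step concrete, here is a minimal sketch of cosine-similarity ranking over chunk vectors. It is plain Python rather than FAISS, and the toy vectors stand in for real embeddings (which would have hundreds of dimensions and must be non-zero).

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two non-zero vectors given as lists of floats."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def top_k(query, chunks, k=2):
    """Rank chunk vectors by cosine similarity to the query; return the top-k indices."""
    order = sorted(range(len(chunks)), key=lambda i: cosine(query, chunks[i]), reverse=True)
    return order[:k]
```

FAISS does the same ranking over normalized embeddings, just with indexing structures that make it fast at scale; the chosen chunks are then passed to the LLM as context.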
I created a summarization and Q&A app also using GPT (and the NLTK library for chunking/tokenization). It seems to work well for <15 pages. I'm keeping it open for folks to test for a couple of days - please do check it out and provide feedback:
https://powerful-dusk-64631.herokuapp.com/
Thanks,
Jalal
Hey,
I wanted to try out the script, but it seems the paper-qa package has been updated and the generate_search_query method doesn't exist anymore. I tried a workaround, but I am not sure if it is correct, since it no longer uses a method from Docs. Can someone have a look?
Note: I also tried changing ChatGPT to Llama 2.
# See here on how to find your Zotero info: https://github.com/urschrei/pyzotero#quickstart
ZOTERO_USER_ID = ''
ZOTERO_API_KEY = ''
ZOTERO_COLLECTION_ID = ''

question = 'How is deep learning used for clustering mass spectra?'

# The following prompt instruction is injected to limit the number of keywords per query
question_prompt = 'A "keyword search" is a list of no more than 3 words, separated by whitespace only and with no boolean operators (e.g. "dog canine puppy"). Avoid adding any new words not in the question unless they are synonyms to the existing words.'

import os
import shutil, sys, re
import requests
from bs4 import BeautifulSoup
from paperqa import Docs
from pyzotero import zotero

# Your Docs class implementation here
docs = Docs()

# Generate search queries manually (since the generate_search_query method is gone)
keywords = [word.lower() for word in question.split() if len(word) > 2]
queries = [f'"{keyword}"' for keyword in keywords]
print(queries)

zot = zotero.Zotero(ZOTERO_USER_ID, 'user', ZOTERO_API_KEY)
searches = [zot.collection_items(
    ZOTERO_COLLECTION_ID,
    q=q.strip('"'),
    limit=10,
    itemType='attachment',
    qmode='everything'
) for q in queries]
print(f'searches: {searches}')

attachments = {item['key']: item for search in searches for item in search if item['data']['contentType'] == 'application/pdf'}.values()
parents = set(a['data']['parentItem'] for a in attachments)
citation_dict = {p: zot.item(p, content='bib', style='american-chemical-society')[0] for p in parents}
result_count = len(parents)
print(f'attachments: {attachments}')
print(f'parents: {parents}')
if result_count == 0:
    print('No matched results in Zotero')
    sys.exit()
print(f'Results: {result_count}')

# Define the directory where PDF files will be saved
pdf_directory = 'data/zotero_pdfs/'
if not os.path.exists(pdf_directory):
    os.makedirs(pdf_directory)

paths = []
citations = []
for attachment in attachments:
    link_mode = attachment['data']['linkMode']
    file_path = os.path.join(pdf_directory, f'{attachment["key"]}.pdf')
    parent = citation_dict[attachment['data']['parentItem']]
    if link_mode == 'imported_file':
        zot.dump(attachment['key'], f'{attachment["key"]}.pdf', pdf_directory)
    elif link_mode == 'linked_file':
        shutil.copy(attachment['data']['path'], file_path)
    elif link_mode == 'imported_url':
        res = requests.get(attachment['data']['url'])
        with open(file_path, 'wb') as f:
            f.write(res.content)
    else:
        raise ValueError(f'Unsupported link mode: {link_mode} for {attachment["key"]}.')
    paths.append(file_path)
    # Strip the leading "(1) " numbering from the ACS-style citation
    citations.append(re.sub(r"^\(\d+\)\s+", "", BeautifulSoup(parent, 'html.parser').get_text().strip()))

for d, c in zip(paths, citations):
    docs.add(d, c)

answer = docs.query(question)
print(answer)
with open('data/zotero-answer.txt', 'w') as f:
    f.write(answer.formatted_answer)
@JannikSchneider12 This script was written several months ago. It hasn't been tested with the latest paper-qa release.
Meanwhile, I see the paper-qa package has added integration with Zotero. Have you checked it out yet?
https://github.com/whitead/paper-qa/blob/main/paperqa/contrib/zotero.py
@lifan0127 Thanks for your reply. I will have a look at it, but I am still at the very beginning with programming.
By the way, is there a way to still run your script if I use the exact package versions you used?
Again, thanks for your help and time.
@JannikSchneider12 Please check out this Hugging Face space: https://huggingface.co/spaces/lifan0127/zotero-qa, where you can ask questions based on your Zotero library without programming.
Also, I am working on a Zotero plugin to incorporate paper QA, among other features, into Zotero. Please check it out if you are interested: https://github.com/lifan0127/ai-research-assistant
I managed to get the code to run. Thank you, this will help me a lot in my current PhD in Industrial Economics. Much appreciated!!!!!