git clone https://github.com/OpenDevin/OpenDevin.git
cd OpenDevin
conda create -n od python=3.10
conda activate od
docker ps
(optional) install docker if not already installed
docker pull ghcr.io/opendevin/sandbox
export OPENAI_API_KEY={your key}
(optional: I had to install rust) curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
(optional) restart terminal
python -m pip install -r requirements.txt
(optional) orjson issue (MacOS)
- pip uninstall orjson
- pip install --no-cache-dir --only-binary :all: orjson
uvicorn opendevin.server.listen:app --port 3000
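Before moving on, a few sanity checks may help (a minimal sketch; the port and image name come from the steps above, and the curl check simply assumes the backend answers plain HTTP on the port you passed to uvicorn):
docker images ghcr.io/opendevin/sandbox   # the sandbox image pulled above should be listed
echo $OPENAI_API_KEY                      # the key has to be set in the same shell that runs uvicorn
curl -i http://localhost:3000             # once uvicorn is up, this should return an HTTP response rather than "connection refused"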
The term 'conda' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is
correct and try again.
I followed the steps from @dfsm, and am getting the following error:
AGENT ERROR: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2d27d0c310>: Failed to establish a new connection: [Errno 111] Connection refused')) when issuing instructions to the UI. Thoughts?
Maybe try updating and restarting ollama? I tried to recreate your error by messing with my config.toml, but I couldn't reproduce.
Similar error: ollama/ollama#1579
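If you are pointing OpenDevin at a local ollama, two quick sanity checks (assuming ollama is installed on the same machine the backend runs on):
ollama serve                            # start the server if it isn't already running
curl http://localhost:11434/api/tags    # should return JSON listing your local models instead of "connection refused"
And on the WSL question below: if the backend runs inside WSL2 while ollama runs on the Windows host, localhost inside WSL generally won't reach it; either run ollama inside WSL as well, or point whatever URL your config.toml uses for ollama at the Windows host's IP instead of localhost.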
Quick question @dfsm, where should ollama be running? In WSL2 or the Windows host? Thanks!
Mine doesn't run npm; it always returns that the npm command was not found, or that sudo was not found, etc. I'm using WSL and not sure why that is. Any idea how I can debug it?
$ /bin/bash: line 1: npm: command not found
The term 'conda' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
try .conda in the top search bar
Keep getting this
"Oops. Something went wrong: Error condensing thoughts: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: <YOUR OP*********KEY>. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}"
Keep getting this
"Oops. Something went wrong: Error condensing thoughts: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: <YOUR OP*********KEY>. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}"
Yes provided
Try to pip install npm?
same problem
Everything worked but I got this error. I guess I'm setting up the API key incorrectly. I did use "set OPENAI_API_KEY={your key}" instead of "export OPENAI_API_KEY={your key}" since I'm on Windows.
Oops. Something went wrong: OpenAIException - Traceback (most recent call last):
File "C:\Users\USER\anaconda3\envs\vscode\Lib\site-packages\litellm\llms\openai.py", line 376, in completion
raise e
File "C:\Users\USER\anaconda3\envs\vscode\Lib\site-packages\litellm\llms\openai.py", line 312, in completion
openai_client = openai(
^^^^^^^
File "C:\Users\USER\anaconda3\envs\vscode\Lib\site-packages\openai\_client.py", line 98, in __init__
raise openaiError(
openai.openaiError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
I have basically the same thing; have you solved it? This is what I'm getting:
Oops. Something went wrong: Error condensing thoughts: OpenAIException - Traceback (most recent call last):
File "C:\Users\Michael\anaconda3\Lib\site-packages\litellm\llms\openai.py", line 376, in completion
raise e
File "C:\Users\Michael\anaconda3\Lib\site-packages\litellm\llms\openai.py", line 312, in completion
openai_client = openai(
^^^^^^^
File "C:\Users\Michael\anaconda3\Lib\site-packages\openai\_client.py", line 98, in __init__
raise openaiError(
openai.openaiError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
I've tried setting the environment variable via ENV in windows and also using the set command both in the frontend and backend terminal instances and not had any luck.
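For what it's worth, on Windows the syntax depends on which shell you start the backend from, and the variable only exists in that session, so uvicorn has to be launched from the same window where you set it. A sketch (the key value is a placeholder):
:: cmd.exe, current session only
set OPENAI_API_KEY=sk-...
# PowerShell, current session only
$env:OPENAI_API_KEY = "sk-..."
:: persist for future sessions (only takes effect in newly opened terminals)
setx OPENAI_API_KEY "sk-..."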
Is there anyone here who can connect with me over email to resolve my issue? Please share your email ID.
(op) C:\Users\pc\Desktop\OpenDevin> uvicorn opendevin.server.listen:app --port 3000
Traceback (most recent call last):
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\asyncio\windows_events.py", line 434, in select
self._poll(timeout)
RuntimeError: <_overlapped.Overlapped object at 0x000002497669D740> still has pending operation at deallocation, the process may crash
Traceback (most recent call last):
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\asyncio\windows_events.py", line 434, in select
self._poll(timeout)
RuntimeError: <_overlapped.Overlapped object at 0x000002497669D740> still has pending operation at deallocation, the process may crash
Traceback (most recent call last):
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\Scripts\uvicorn.exe\__main__.py", line 7, in <module>
sys.exit(main())
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1078, in main
rv = self.invoke(ctx)
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\main.py", line 409, in main
run(
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\main.py", line 575, in run
server.run()
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\server.py", line 65, in run
return asyncio.run(self.serve(sockets=sockets))
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\asyncio\runners.py", line 44, in run
return loop.run_until_complete(main)
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 641, in run_until_complete
return future.result()
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\server.py", line 69, in serve
await self._serve(sockets)
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\server.py", line 76, in _serve
config.load()
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\config.py", line 433, in load
self.loaded_app = import_from_string(self.app)
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\pc\Desktop\OpenDevin\opendevin\server\listen.py", line 4, in <module>
import agenthub # noqa F401 (we import this to get the agents registered)
File "C:\Users\pc\Desktop\OpenDevin\agenthub\__init__.py", line 5, in <module>
from . import monologue_agent # noqa: E402
File "C:\Users\pc\Desktop\OpenDevin\agenthub\monologue_agent\__init__.py", line 2, in <module>
from .agent import MonologueAgent
File "C:\Users\pc\Desktop\OpenDevin\agenthub\monologue_agent\agent.py", line 28, in <module>
from agenthub.monologue_agent.utils.memory import LongTermMemory
File "C:\Users\pc\Desktop\OpenDevin\agenthub\monologue_agent\utils\memory.py", line 37, in <module>
embed_model = HuggingFaceEmbedding(
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\embeddings\huggingface\base.py", line 86, in __init__
self._model = SentenceTransformer(
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\sentence_transformers\SentenceTransformer.py", line 191, in __init__
modules = self._load_sbert_model(
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\sentence_transformers\SentenceTransformer.py", line 1246, in _load_sbert_model
module = module_class.load(module_path)
File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\sentence_transformers\models\Pooling.py", line 227, in load
with open(os.path.join(input_path, "config.json")) as fIn:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\pc\AppData\Local\llama_index\models--BAAI--bge-small-en-v1.5\snapshots\5c38ec7c405ec4b44b94cc5a9bb96e735b38267a\1_Pooling\config.json'
The term 'conda' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
You need to have conda installed in order to run it. If you use venv for virtual environments instead, you can run these in the directory where you have the project files:
python -m venv od (or python3.10 -m venv od if your default python isn't 3.10)
od\Scripts\activate
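Once the venv is active, the remaining steps from the gist run inside it unchanged; a minimal sketch, assuming you are in the OpenDevin directory:
python -m pip install -r requirements.txt
uvicorn opendevin.server.listen:app --port 3000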
got this kinda same error in ubuntu
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/llama_index/models--BAAI--bge-small-en-v1.5/snapshots/5c38ec7c405ec4b44b94cc5a9bb96e735b38267a/1_Pooling/config.json'
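One thing worth trying for this FileNotFoundError: it often means the BAAI/bge-small-en-v1.5 embedding model download was interrupted, leaving an incomplete snapshot in the cache. Deleting the cached folder named in the error and re-running the server lets it re-download (paths taken from the errors above; adjust to your machine):
# Linux / WSL
rm -rf /tmp/llama_index/models--BAAI--bge-small-en-v1.5
# Windows (PowerShell)
Remove-Item -Recurse -Force "$env:LOCALAPPDATA\llama_index\models--BAAI--bge-small-en-v1.5"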
The term 'conda' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
You need to have conda installed in order to run it. If you use venv for virtual environments you can run these instead in the directory where you have the project files:
python -m venv od
od\Scripts\activate
python3.10 -m venv od
I did it and it still shows this:
(OpenDevin-PwRe2zua) (base) C:\Users\pc\OpenDevin>uvicorn opendevin.server.listen:app --port 3000
Traceback (most recent call last):
File "C:\Users\pc\Documents\Anaconda\Lib\asyncio\windows_events.py", line 444, in select
self._poll(timeout)
RuntimeError: <_overlapped.Overlapped object at 0x00000201325CF630> still has pending operation at deallocation, the process may crash
Traceback (most recent call last):
File "C:\Users\pc\Documents\Anaconda\Lib\asyncio\windows_events.py", line 444, in select
self._poll(timeout)
RuntimeError: <_overlapped.Overlapped object at 0x00000201325CF630> still has pending operation at deallocation, the process may crash
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Scripts\uvicorn.exe\__main__.py", line 7, in <module>
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Lib\site-packages\click\core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Lib\site-packages\click\core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Lib\site-packages\click\core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Lib\site-packages\click\core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Lib\site-packages\uvicorn\main.py", line 409, in main
run(
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Lib\site-packages\uvicorn\main.py", line 575, in run
server.run()
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Lib\site-packages\uvicorn\server.py", line 65, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\Documents\Anaconda\Lib\asyncio\runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "C:\Users\pc\Documents\Anaconda\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\Documents\Anaconda\Lib\asyncio\base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Lib\site-packages\uvicorn\server.py", line 69, in serve
await self._serve(sockets)
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Lib\site-packages\uvicorn\server.py", line 76, in _serve
config.load()
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Lib\site-packages\uvicorn\config.py", line 433, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Lib\site-packages\uvicorn\importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\Documents\Anaconda\Lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\pc\OpenDevin\opendevin\server\listen.py", line 4, in <module>
import agenthub # noqa F401 (we import this to get the agents registered)
^^^^^^^^^^^^^^^
File "C:\Users\pc\OpenDevin\agenthub\__init__.py", line 5, in <module>
from . import monologue_agent # noqa: E402
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\OpenDevin\agenthub\monologue_agent\__init__.py", line 2, in <module>
from .agent import MonologueAgent
File "C:\Users\pc\OpenDevin\agenthub\monologue_agent\agent.py", line 28, in <module>
from agenthub.monologue_agent.utils.memory import LongTermMemory
File "C:\Users\pc\OpenDevin\agenthub\monologue_agent\utils\memory.py", line 37, in <module>
embed_model = HuggingFaceEmbedding(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Lib\site-packages\llama_index\embeddings\huggingface\base.py", line 86, in __init__
self._model = SentenceTransformer(
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 191, in __init__
modules = self._load_sbert_model(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 1246, in _load_sbert_model
module = module_class.load(module_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\.virtualenvs\OpenDevin-PwRe2zua\Lib\site-packages\sentence_transformers\models\Pooling.py", line 227, in load
with open(os.path.join(input_path, "config.json")) as fIn:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\pc\AppData\Local\llama_index\models--BAAI--bge-small-en-v1.5\snapshots\5c38ec7c405ec4b44b94cc5a9bb96e735b38267a\1_Pooling\config.json'
Hi! Everything looks fine until I use it. Error: Oops. Something went wrong: 'NoneType' object has no attribute 'request'
I got most of the errors on here yesterday; the new version looks to have fixed those, but now I'm getting "Oops. Something went wrong."
I got this error on Mac after running the latest version using make. I am getting:
/opt/homebrew/Caskroom/miniconda/base/envs/myenv/bin/python" -m pip install -r requirements.txt
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
It cannot find requirements.txt.
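That particular error is usually just pip being run outside the repository, since requirements.txt lives in the repo root; a quick check before re-running:
cd OpenDevin                               # or wherever you cloned the repo
ls requirements.txt                        # should exist here; if not, the clone didn't finish
python -m pip install -r requirements.txt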
I managed to get it running on Windows. This is the summary of the process by GPT4:
Comprehensive Setup Guide for OpenDevin Project on Windows 11
Prerequisites
- Windows 11: Ensure your system is running Windows 11, for the latest WSL support.
- WSL Installed: WSL must be installed on your Windows 11 system. Follow Microsoft's guide on installing WSL.
- Ubuntu on WSL: Install Ubuntu from the Microsoft Store post-WSL setup. This Linux distribution is where the OpenDevin project setup occurs.
- Node.js: Required for frontend development.
- Python 3.11: Necessary for backend development.
- Pipenv: For managing Python packages and environments.
- Git: To clone the project repository.
Step-by-Step Guide
1. Setting Up WSL and Ubuntu
- Install WSL on Windows 11 by following the official instructions.
- Install Ubuntu from the Microsoft Store and set up your UNIX username and password upon launch.
2. Installing Node.js
- Install NVM (Node Version Manager) in Ubuntu to manage Node.js versions:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
- Install Node.js version 16 (or higher) using NVM:
nvm install 16
nvm use 16
- Verify the Node.js installation:
node --version
3. Setting Up Python and Pipenv
- Ensure Python 3.11 is installed:
python3 --version
- Install Pipenv with pip:
pip install pipenv
4. Cloning the OpenDevin Project
- Clone the OpenDevin repository into your desired directory:
git clone https://github.com/OpenDevin/OpenDevin.git
cd OpenDevin
5. Backend Setup with Pipenv
- Set up the backend environment within the OpenDevin directory:
pipenv --python 3.11
pipenv install
- Activate the environment:
pipenv shell
6. Frontend Setup
- Navigate to the frontend directory and install dependencies:
npm install
- Start the frontend server:
npm start
7. Running the Project
- Follow the project's README.md for instructions on running both frontend and backend.
Additional Notes
- Docker: If required by the project, Docker usage will be detailed in the project's documentation.
- Troubleshooting: Refer to the project's README.md or issues section for any setup issues or compatibility concerns.
This guide provides an overview of setting up the OpenDevin project on Windows 11 using WSL with Ubuntu. Always refer to the project's official documentation for the most accurate and updated information.
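As a complement to step 7, a sketch built from commands that appear elsewhere in this thread (the frontend directory name is taken from the error paths quoted below; the backend command is the one from the original gist):
# terminal 1: backend
uvicorn opendevin.server.listen:app --port 3000
# terminal 2: frontend
cd frontend
npm install
npm start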
What version of WSL are we running?
For reference, when I start "sudo npm start" I get the following error.
file:///home/lewy/OpenDevin/frontend/node_modules/vite/bin/vite.js:7
await import('source-map-support').then((r) => r.default.install())
^^^^^
SyntaxError: Unexpected reserved word
at Loader.moduleStrategy (internal/modules/esm/translators.js:133:18)
at async link (internal/modules/esm/module_job.js:42:21)
(I needed to run sudo because there were certain files I couldn't access when doing "npm install".)
Mine don't run the npm, always return that the npm command was not found or sudo was not found etc, and I`m using wsl not sure why is that any idea on how can I debug?
$ /bin/bash: line 1: npm: command not found
Try the following
sudo apt-get update
sudo apt install nodejs # This includes npm and nodejs
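One caveat: on older Ubuntu releases the apt nodejs package can be quite old, and an old Node is exactly what produces the Vite "Unexpected reserved word" error quoted above. If node --version reports something old (e.g. v12), installing a newer Node via nvm, as in the Windows guide earlier in this thread, should avoid that:
node --version
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
nvm install 18
nvm use 18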
$ python -m pip install -r requirements.txt --force-reinstall
Processing /private/tmp/docutils-20231015-5350-1xd6hrn/docutils-0.20.1 (from -r requirements.txt (line 34))
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/private/tmp/docutils-20231015-5350-1xd6hrn/docutils-0.20.1'
This is macOS Sonoma. I tried the brew-installed Docker to no avail, then removed it with brew and installed the official Docker Desktop. My theory is that the brew removal of Docker left something behind that is causing this error. Does anybody have a suggestion?
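A guess rather than a confirmed fix: that /private/tmp/docutils path is pip's own temporary build directory disappearing mid-install, not anything Docker left behind. Two things worth trying before re-running:
python -m pip cache purge
python -m pip install -r requirements.txt --no-cache-dir
Dropping --force-reinstall also avoids rebuilding packages that are already fine.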
I get this error after "uvicorn opendevin.server.listen:app --port 3000": ModuleNotFoundError: No module named 'llama_index.vector_stores'
I installed on WSL Ubuntu, but I can't fix this error, so the backend doesn't start:
(OpenDevin) (base) ubuntu@Peter:~/OpenDevin$ uvicorn opendevin.server.listen:app --port 3000
Traceback (most recent call last):
File "/home/ubuntu/.local/share/virtualenvs/OpenDevin-Etc5k-cQ/bin/uvicorn", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/ubuntu/.local/share/virtualenvs/OpenDevin-Etc5k-cQ/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/.local/share/virtualenvs/OpenDevin-Etc5k-cQ/lib/python3.11/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/home/ubuntu/.local/share/virtualenvs/OpenDevin-Etc5k-cQ/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/.local/share/virtualenvs/OpenDevin-Etc5k-cQ/lib/python3.11/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/.local/share/virtualenvs/OpenDevin-Etc5k-cQ/lib/python3.11/site-packages/uvicorn/main.py", line 409, in main
run(
File "/home/ubuntu/.local/share/virtualenvs/OpenDevin-Etc5k-cQ/lib/python3.11/site-packages/uvicorn/main.py", line 575, in run
server.run()
File "/home/ubuntu/.local/share/virtualenvs/OpenDevin-Etc5k-cQ/lib/python3.11/site-packages/uvicorn/server.py", line 65, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/runners.py", line 188, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/runners.py", line 120, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
File "/home/ubuntu/.local/share/virtualenvs/OpenDevin-Etc5k-cQ/lib/python3.11/site-packages/uvicorn/server.py", line 69, in serve
await self._serve(sockets)
File "/home/ubuntu/.local/share/virtualenvs/OpenDevin-Etc5k-cQ/lib/python3.11/site-packages/uvicorn/server.py", line 76, in _serve
config.load()
File "/home/ubuntu/.local/share/virtualenvs/OpenDevin-Etc5k-cQ/lib/python3.11/site-packages/uvicorn/config.py", line 433, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/.local/share/virtualenvs/OpenDevin-Etc5k-cQ/lib/python3.11/site-packages/uvicorn/importer.py", line 22, in import_from_string
raise exc from None
File "/home/ubuntu/.local/share/virtualenvs/OpenDevin-Etc5k-cQ/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/ubuntu/OpenDevin/opendevin/server/listen.py", line 4, in <module>
import agenthub # noqa F401 (we import this to get the agents registered)
^^^^^^^^^^^^^^^
File "/home/ubuntu/OpenDevin/agenthub/__init__.py", line 5, in <module>
from . import monologue_agent # noqa: E402
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/OpenDevin/agenthub/monologue_agent/__init__.py", line 2, in <module>
from .agent import MonologueAgent
File "/home/ubuntu/OpenDevin/agenthub/monologue_agent/agent.py", line 28, in <module>
from agenthub.monologue_agent.utils.memory import LongTermMemory
File "/home/ubuntu/OpenDevin/agenthub/monologue_agent/utils/memory.py", line 5, in <module>
from llama_index.vector_stores.chroma import ChromaVectorStore
ModuleNotFoundError: No module named 'llama_index.vector_stores'
I had this one. I fixed it by installing something. I think I solved it by just pip install llama_index
I had to manually use pip to install basically everything in the dependencies file since the file wasn't working for me.
I think I solved it by just pip install llama_index
Nothing changes if I use pip install llama_index.
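For the llama_index.vector_stores.chroma import specifically, recent llama-index releases split the integrations into separate packages, so a plain pip install llama_index may not pull them in. Worth a try (package names as published on PyPI, to the best of my knowledge):
pip install llama-index llama-index-vector-stores-chroma
pip install llama-index-embeddings-huggingface   # for the HuggingFaceEmbedding import in memory.py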