@MSFTserver
Last active June 25, 2023 02:13
Guide to installing Disco v5+ locally on Windows

Install Disco Diffusion v5 for Windows

NOTE: PyTorch3D no longer has to be compiled. I have stripped out the one function we use, which makes this a lot easier and also means we no longer have to use WSL2 with Linux; everything can now run directly on your Windows system.

The comments section is not checked often for issues; please join the Disco Diffusion Discord for assistance:

https://discord.gg/mK4AneuycS

You may now use the official Disco Diffusion notebook with this tutorial, as it has been updated to reflect the changes here for better cross-platform support.

This will walk you through getting your environment set up to run most of the GANs, with Anaconda as a virtual Python environment.

System Requirements:

  OS: Windows (11/10/8/7), Ubuntu (19, 20)
  GPU: Nvidia (AMD hasn't been tested)
  VRAM: 12 GB+

1) Download Tools!

  A) CUDA-enabled GPU
  B) Python (Anaconda)
  C) Git
    • https://git-scm.com/downloads
    • version control manager for code
    • we just use it to download repos from GitHub
    • must be on the system PATH; when installing, select the option to add it to the system PATH
  D) FFmpeg
  E) ImageMagick
  F) Wget
    • used to download models for projects
    • Windows users need this version
      • https://eternallybored.org/misc/wget/
      • Add to PATH on Windows:
      • download the .exe
      • create a new folder to put the .exe in (preferably on the root of your C:\ drive)
        • e.g. C:\wget\wget.exe
      • open Control Panel and search for environment variables
      • select the one with the shield icon, Edit the system environment variables
      • click the button at the bottom, Environment Variables...
      • under System variables, find and select the Path variable
      • click the Edit... button and then New
        • add the path to the folder the .exe is in
        • e.g. C:\wget
      • once entered, click OK on each window until all three are closed
      • for the new environment variable to take effect, you must open a new Command Prompt window (a quick check follows this list)
    • Linux users can just use the package in their distributions
  G) cURL
    • used to download models; some projects use this instead of wget
    • latest versions of Windows have cURL pre-installed
    • older versions that don't include cURL can use this one
    • Linux users can just use the package in their distributions
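
Quick check: once the tools above are installed and your PATH changes have taken effect (open a fresh Command Prompt), each of the following should print a version or driver summary rather than a "not recognized" error. These are just the tools' standard version flags, not part of Disco itself; if one fails, revisit the matching install/PATH step above.

  • nvidia-smi
  • git --version
  • ffmpeg -version
  • magick -version
  • wget --version
  • curl --version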

2) Install Disco Dependencies

  A) Set up and activate the conda env
    • conda create --name disco-diffusion python=3.9
    • conda activate disco-diffusion
  B) Install a few more pip dependencies
    • pip install ipykernel opencv-python pandas regex matplotlib ipywidgets
  C) Install PyTorch with CUDA support! (see the example after this list)
  D) Download the Disco Diffusion repo!
    • git clone https://github.com/alembics/disco-diffusion.git
    • change directories into the downloaded repo
      • cd disco-diffusion
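
For step C, the right PyTorch build depends on your GPU driver / CUDA version and the command changes over time, so treat the line below as an illustrative example only (it assumes the CUDA 11.3 wheels; check https://pytorch.org/get-started/locally/ for the command matching your setup):

  • pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113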

3) Run Disco Diffusion

There are several ways to run Disco Diffusion at this point:

1. [PYTHON .py]

A plain Python file, which means you will need to go into the file, manually find all the configuration options, and change them as needed. An easy way to go about this is to search the document for #@param and edit the lines containing that trailing comment, e.g. use_secondary_model = True #@param {type: 'boolean'} (an illustrative snippet follows the run command below).

  • Run Disco Diffusion
    • python disco.py
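
For illustration only, here are a few typical #@param lines you might edit; the exact setting names, defaults, and annotations depend on the notebook version you cloned, so treat these as placeholders rather than an exhaustive list:

    batch_name = 'TimeToDisco' #@param {type: 'string'}
    width_height = [1280, 768] #@param {type: 'raw'}
    steps = 250 #@param {type: 'number'}
    use_secondary_model = True #@param {type: 'boolean'}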

2-3. [VS .py/.ipynb]

Running the .py or .ipynb file directly in VS Code also requires editing the #@param lines in the code.

4-5. [VS + Jupyter extension .py/.ipynb]

Using the Jupyter extension in VS Code gives individual-cell support, so you can run either the .ipynb or the .py file; this also requires editing the #@param lines in the code.

  • Download Visual Studio Code
  • Get the Jupyter Notebook extensions
    • head over to the Extensions tab in VS Code
    • search for jupyter and install the one from Microsoft
    • after this is enabled the notebook should have several new toolbars and features
      • you can run either the .py file or the .ipynb file; both support individual cells
    • ENJOY!

6. [Jupyter .ipynb]

Using Jupyter Notebook to run the .ipynb file also requires editing the #@param lines in the code.

  • with Anaconda installed you should already have Jupyter Notebook installed; if you don't, simply run:
    pip install jupyterlab
  • Run Jupyter
    • jupyter notebook
      • this launches the Jupyter notebook server in the current directory and opens a web page
    • under the Files tab double click and open the file named Disco-Diffusion.ipynb
    • ENJOY!

7. [Colab w/ Jupyter using colab links]

Using Google Colab as a front end to get the nice view of all the editable fields, while using Jupyter as middleware to connect to your local resources.

  • with Anaconda installed you should already have Jupyter Notebook installed; if you don't, simply run:
    pip install jupyterlab
  • Connect to colab front end
    • pip install --upgrade "jupyter_http_over_ws>=0.0.7"
    • Enable the extension for jupyter
      • jupyter serverextension enable --py jupyter_http_over_ws
    • Start the jupyter server
      • jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' --port=8888 --NotebookApp.port_retries=0 --no-browser
    • Inside Google Colab, click the down arrow icon next to the Connect button to view more options
    • Select Connect to a local runtime and enter the localhost URL that was printed in the console when you started the Jupyter server
    • ENJOY!
@Pikobirb

this is where I'm hitting a wall. I just am not a coder. I know what I need to do, but not how to do it. I have to define what folder models need to go into. Otherwise I'll have to go to each link in the code and manually put it in its folder.

NameError Traceback (most recent call last)
Input In [1], in <cell line: 129>()
126 print('First URL Failed using FallBack')
127 download_models(diffusion_model,use_secondary_model,True)
--> 129 download_models(diffusion_model,use_secondary_model)
131 model_config = model_and_diffusion_defaults()
132 if diffusion_model == '512x512_diffusion_uncond_finetune_008100':

Input In [1], in download_models(diffusion_model, use_secondary_model, fallback)
37 model_512_link_fb = 'https://huggingface.co/lowlevelware/512x512_diffusion_unconditional_ImageNet/resolve/main/512x512_diffusion_uncond_finetune_008100.pt'
38 model_secondary_link_fb = 'https://the-eye.eu/public/AI/models/v-diffusion/secondary_model_imagenet_2.pth'
---> 40 model_256_path = f'{model_path}/256x256_diffusion_uncond.pt'
41 model_512_path = f'{model_path}/512x512_diffusion_uncond_finetune_008100.pt'
42 model_secondary_path = f'{model_path}/secondary_model_imagenet_2.pth'

NameError: name 'model_path' is not defined
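
The NameError above just means the notebook's model_path variable was never defined before download_models() ran. A minimal sketch of defining it yourself, assuming the stock layout where downloaded models live in a models folder next to disco.py (the names PROJECT_DIR and model_path mirror the traceback; adjust the paths to your install):

    # sketch: define the folders the download code expects, before download_models() is called
    import os

    PROJECT_DIR = os.path.abspath(os.getcwd())        # folder containing disco.py / the notebook
    model_path = os.path.join(PROJECT_DIR, 'models')  # where the .pt / .pth model files go
    os.makedirs(model_path, exist_ok=True)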

@chippwalters

Few comments:

  • Install Anaconda on the C: drive (use the default). I installed with PATH vars, installed for just me; it didn't work and threw an error: KeyError('pkgs_dirs'), so I went ahead and installed an older version of Anaconda and it worked. I needed to install Jupyter from scratch as the instructions say, as I didn't know whether it would be installed with the older version. OLDER VERSION: Anaconda3-5.3.1-Windows-x86_64.exe from https://repo.anaconda.com/archive/
  • GIT installs correctly in Win 11 though you can't set the environment var
  • set the wget path var to "C:\wget" not C:/wget

When you say "setup and activate conda environment" you should be explicit about which program to use to do that. Anaconda has two different command prompt apps. Use the regular one, not the PowerShell version.

@chippwalters

Just a few more points about running locally with Colab.
TO RESTART EACH TIME WITH COLAB NOTEBOOK IN BROWSER
Open Anaconda CMD Prompt (NOTE: If you installed PATH environment variables, you can open any CMD prompt)
(Ctrl + C cancels Jupyter, 'exit' quits CMD prompt)

Then paste and run:

conda activate disco-diffusion

then

cd disco-diffusion

then

jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' --port=8888 --NotebookApp.port_retries=0 --no-browser

Then copy the localhost url from the CMD window which is provided for you (just mousedown, drag, CTRL + C)
It will look something like:

http://localhost:8888/?token=abb18c4f23817bfdd27b3368d36c603533c28c9cc550632e

Open the colab notebook and paste in the local connect field as per previous instructions
minimize the Anaconda Prompt (keep it running for the entirety of the session)
When done, CTRL + C to exit the Jupyter server and type exit to close the CMD window

@bkrabach

I'm having this issue too, trying to run locally, and it crashes because of that.

ModuleNotFoundError                       Traceback (most recent call last)
in
     67 try:
---> 68     from midas.dpt_depth import DPTDepthModel
     69 except:

ModuleNotFoundError: No module named 'midas'

During handling of the above exception, another exception occurred:

FileNotFoundError                         Traceback (most recent call last)
in
     73     shutil.move('MiDaS/utils.py', 'MiDaS/midas_utils.py')
     74 if not os.path.exists(f'{model_path}/dpt_large-midas-2f21e586.pt'):
---> 75     wget("https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", model_path)
     76 sys.path.append(f'{PROJECT_DIR}/MiDaS')
     77

in wget(url, outputdir)
     15
     16 def wget(url, outputdir):
---> 17     res = subprocess.run(['wget', url, '-P', f'{outputdir}'], stdout=subprocess.PIPE).stdout.decode('utf-8')
     18     print(res)
     19

C:\Python36\lib\subprocess.py in run(input, timeout, check, *popenargs, **kwargs)
    421     kwargs['stdin'] = PIPE
    422
--> 423     with Popen(*popenargs, **kwargs) as process:
    424         try:
    425             stdout, stderr = process.communicate(input, timeout=timeout)

C:\Python36\lib\subprocess.py in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, encoding, errors)
    727                                 c2pread, c2pwrite,
    728                                 errread, errwrite,
--> 729                                 restore_signals, start_new_session)
    730     except:
    731         # Cleanup if the child failed starting.

C:\Python36\lib\subprocess.py in _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_start_new_session)
   1015                                          env,
   1016                                          os.fspath(cwd) if cwd is not None else None,
-> 1017                                          startupinfo)
   1018     finally:
   1019         # Child is launched. Close the parent's copy of those pipe

FileNotFoundError: [WinError 2] The system cannot find the file specified

Did you solve this issue yet? I am having the same problem.

I solved this by going to the link to the github for the missing file, and uploading it into the models directory.

I checked and the file was already downloaded, but was receiving the same error. For me, the solution was to edit the import statements in the 1.4 Define Midas functions section to the following (added the MiDaS. prefix):

#@title ### 1.4 Define Midas functions

from MiDaS.midas.dpt_depth import DPTDepthModel
from MiDaS.midas.midas_net import MidasNet
from MiDaS.midas.midas_net_custom import MidasNet_small
from MiDaS.midas.transforms import Resize, NormalizeImage, PrepareForNet

@mrschneebly

After downloading the models manually I now get this error:

Traceback (most recent call last):
  File "C:\Users\marti\disco-diffusion\disco.py", line 613, in <module>
    from infer import InferenceHelper
ModuleNotFoundError: No module named 'infer'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Program Files\Python37\lib\runpy.py", line 183, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "C:\Program Files\Python37\lib\runpy.py", line 109, in _get_module_details
    __import__(pkg_name)
  File "C:\Users\marti\disco-diffusion\disco.py", line 619, in <module>
    wget("https://cloudflare-ipfs.com/ipfs/Qmd2mMnDLWePKmgfS8m6ntAg4nhV5VkUyAydYBp8cWWeB7/AdaBins_nyu.pt", f'{PROJECT_DIR}/pretrained')
  File "C:\Users\marti\disco-diffusion\disco.py", line 424, in wget
    res = subprocess.run(['wget', url, '-P', f'{outputdir}'], stdout=subprocess.PIPE).stdout.decode('utf-8')
  File "C:\Program Files\Python37\lib\subprocess.py", line 453, in run
    with Popen(*popenargs, **kwargs) as process:
  File "C:\Program Files\Python37\lib\subprocess.py", line 756, in __init__
    restore_signals, start_new_session)
  File "C:\Program Files\Python37\lib\subprocess.py", line 1155, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified

I have no clue why it can't find an 'infer' module... Please help!

@FJuri

FJuri commented Jun 28, 2022

I had tons of errors too at first; check out the Discord channel above, most issues have been discussed there.

In case it helps, this is my code section in disco.py:

#Adabins stuff
if USE_ADABINS:
    try:
        from infer import InferenceHelper
    except:
        if not os.path.exists("AdaBins"):
            gitclone("https://github.com/shariqfarooq123/AdaBins.git")
        if not os.path.exists(f'{PROJECT_DIR}/pretrained/AdaBins_nyu.pt'):
            createPath(f'{PROJECT_DIR}/pretrained')
            wget("https://cloudflare-ipfs.com/ipfs/Qmd2mMnDLWePKmgfS8m6ntAg4nhV5VkUyAydYBp8cWWeB7/AdaBins_nyu.pt", f'{PROJECT_DIR}/pretrained')
        sys.path.append(f'{PROJECT_DIR}/AdaBins')
    from infer import InferenceHelper
    MAX_ADABINS_AREA = 500000

@mrschneebly

No module named 'infer'

setting the PATH to wget solved it

@Tobe2d

Tobe2d commented Jun 29, 2022

Any tips on using video_init_path on a local drive? I tried copying the path but it is not seeing it, as if there is no video, unlike on Colab + Gdrive where it works directly. My issue is with Colab + local GPU and local drive.

@Tobe2d

Tobe2d commented Jul 1, 2022

the error is:

None
Frame 0 Prompt: ['detailed rusty steampunk helmet, with copper and leather elements, artstation']
Seed used: 3097663204

FileNotFoundError Traceback (most recent call last)
Input In [39], in <cell line: 217>()
216 torch.cuda.empty_cache()
217 try:
--> 218 do_run()
219 except KeyboardInterrupt:
220 pass

Input In [23], in do_run()
542 init = None
543 if init_image is not None:
--> 544 init = Image.open(fetch(init_image)).convert('RGB')
545 init = init.resize((args.side_x, args.side_y), Image.LANCZOS)
546 init = TF.to_tensor(init).to(device).unsqueeze(0).mul(2).sub(1)

Input In [23], in fetch(url_or_path)
73 fd.seek(0)
74 return fd
---> 75 return open(url_or_path, 'rb')

FileNotFoundError: [Errno 2] No such file or directory: '\\wsl.localhost\Ubuntu\home\username\disco-diffusion\init_images\Mandalorian.jpg'

@atom-unhinged

Hi there. Thank you for this, and I can't wait to use it, but I just blew a week trying to get DD to run on WSL, then Windows without Linux, with no luck. I've tried everything and now countless repos. The furthest I get is to Diffuse, then I get a "transformation_fn" not defined error. But for this particular install I get FileNotFoundError: [WinError 2] The system cannot find the file specified

---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Input In [3], in <cell line: 67>()
67 try:
---> 68 from midas.dpt_depth import DPTDepthModel
69 except:

ModuleNotFoundError: No module named 'midas'

During handling of the above exception, another exception occurred:

FileNotFoundError Traceback (most recent call last)
Input In [3], in <cell line: 67>()
73 shutil.move('MiDaS/utils.py', 'MiDaS/midas_utils.py')
74 if not os.path.exists(f'{model_path}/dpt_large-midas-2f21e586.pt'):
---> 75 wget("https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", model_path)
76 sys.path.append(f'{PROJECT_DIR}/MiDaS')
78 try:

Input In [2], in wget(url, outputdir)
19 def wget(url, outputdir):
---> 20 res = subprocess.run(['wget', url, '-P', f'{outputdir}'], stdout=subprocess.PIPE).stdout.decode('utf-8')
21 print(res)

File ~\anaconda3\envs\disco-diffusion\lib\subprocess.py:505, in run(input, capture_output, timeout, check, *popenargs, **kwargs)
502 kwargs['stdout'] = PIPE
503 kwargs['stderr'] = PIPE
--> 505 with Popen(*popenargs, **kwargs) as process:
506 try:
507 stdout, stderr = process.communicate(input, timeout=timeout)

File ~\anaconda3\envs\disco-diffusion\lib\subprocess.py:951, in Popen.init(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, user, group, extra_groups, encoding, errors, text, umask)
947 if self.text_mode:
948 self.stderr = io.TextIOWrapper(self.stderr,
949 encoding=encoding, errors=errors)
--> 951 self._execute_child(args, executable, preexec_fn, close_fds,
952 pass_fds, cwd, env,
953 startupinfo, creationflags, shell,
954 p2cread, p2cwrite,
955 c2pread, c2pwrite,
956 errread, errwrite,
957 restore_signals,
958 gid, gids, uid, umask,
959 start_new_session)
960 except:
961 # Cleanup if the child failed starting.
962 for f in filter(None, (self.stdin, self.stdout, self.stderr)):

File ~\anaconda3\envs\disco-diffusion\lib\subprocess.py:1420, in Popen._execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_gid, unused_gids, unused_uid, unused_umask, unused_start_new_session)
1418 # Start the process
1419 try:
-> 1420 hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
1421 # no special security
1422 None, None,
1423 int(not close_fds),
1424 creationflags,
1425 env,
1426 cwd,
1427 startupinfo)
1428 finally:
1429 # Child is launched. Close the parent's copy of those pipe
1430 # handles that only the child should have open. You need
(...)
1433 # pipe will not close when the child process exits and the
1434 # ReadFile will hang.
1435 self._close_pipe_fds(p2cread, p2cwrite,
1436 c2pread, c2pwrite,
1437 errread, errwrite)

FileNotFoundError: [WinError 2] The system cannot find the file specified

Can you please tell me what it is I'm doing wrong, or link me to some more info?

FTR I can use Colab just fine but want to run locally. I can connect but encounter errors no matter how I try to run it.

Thank you

@ewanw58385

ewanw58385 commented Jul 17, 2022

TO ANYONE WITH MISSING MODEL MIDAS/INFER FILES OR WGET PROBLEMS

  1. Type:

explorer.exe .

(^Including the . at the end) into the terminal to open the directory. You may notice you have a "Disco Diffusion" folder INSIDE a disco diffusion folder, with the child folder containing many of the same files.

  2. If you do have a duplicate folder, copy everything from the PARENT Disco Diffusion folder (except the duplicate folder), and paste the contents inside the disco diffusion duplicate/sub-folder. Replace all the files with the same name when prompted. (You probably could just redirect your terminal to the parent folder and delete the sub-folder, but I have no idea how to do that as I don't know how to code in a terminal.)

  3. Delete the files you've just copied over from the parent folder to remove duplicates.

Run to check if this has resolved any issues. If not, then:

  1. The error messages for missing files should provide a URL to a download, where you can download the file it is looking for. Download this file. (For missing MIDAS, the file is called "dpt_large_midas-". For missing INFER, I believe the file is called "AdaBins".)

  2. Place the downloaded file within the correct folder in the directory we opened previously ^. The error message will specify what folder it is expecting the file to be in. Midas is in the Models folder, and AdaBins is in a folder called AdaBins.

Run again to check if this has resolved any issues

It should now be detecting the files correctly, but you may still be receiving an error where it cannot find the diffusion model. There is not a file to download or a directory to check, as this error concerns wget - which should have been installed previously.

  1. Instead of putting wget in a folder and adding the directory to the PATH environment variable (like I did), move the wget.exe file inside your System32 folder. I don't believe you need to set the PATH variable for this, and the .exe file should not be in a folder.

Hope this has helped someone!
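
An alternative to touching System32: the wget() helper in disco.py only shells out to a wget executable (that is the subprocess.run(['wget', url, '-P', ...]) call visible in the tracebacks above), so swapping it for a pure-Python download sidesteps the PATH problem entirely. A rough, untested sketch of such a replacement, keeping the same signature as the helper shown in the errors:

    # sketch: drop-in replacement for the wget() helper so wget.exe is not needed at all
    import os
    import urllib.request

    def wget(url, outputdir):
        os.makedirs(outputdir, exist_ok=True)
        # save under outputdir using the file name from the URL, like `wget -P` would
        outpath = os.path.join(outputdir, url.split('/')[-1])
        print(f'downloading {url} -> {outpath}')
        urllib.request.urlretrieve(url, outpath)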

@ShiroSora69

ShiroSora69 commented Jul 23, 2022

The file for missing MIDAS (dpt_large-midas-2f21e586.pt) goes in the models folder
The file for missing INFER (AdaBins_nyu.pt) goes in the pretrained folder

For the next error, copy-paste the wget.exe into your System32 folder. (credits to the guy above me)

@FREQ-EE

FREQ-EE commented Jul 23, 2022

I finally got set up and running, but after generating the first frame I keep encountering this error and am not able to find the solution. Any suggestions out there?

Frames: 0%
1/5000 [00:00<00:55, 90.89it/s]
translation_x: 0.0 translation_y: 0.0 translation_z: 10.0 rotation_3d_x: 0.0 rotation_3d_y: 0.0 rotation_3d_z: 1.25
translation: [-0.0, 0.0, -0.05]
rotation: [0.0, 0.0, 1.25]
rot_mat: tensor([[[ 0.9998, -0.0218,  0.0000],
         [ 0.0218,  0.9998,  0.0000],
         [ 0.0000,  0.0000,  1.0000]]], device='cuda:0')
Seed used: 120943660
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
Input In [29], in <cell line: 217>()
    216 torch.cuda.empty_cache()
    217 try:
--> 218     do_run()
    219 except KeyboardInterrupt:
    220     pass

Input In [6], in do_run()
    407 else:
    408   img_filepath = '/content/prevFrame.png' if is_colab else 'prevFrame.png'
--> 410 next_step_pil = do_3d_step(img_filepath, frame_num, midas_model, midas_transform)
    411 next_step_pil.save('prevFrameScaled.png')
    413 ### Turbo mode - skip some diffusions, use 3d morph for clarity and to save time

Input In [6], in do_3d_step(img_filepath, frame_num, midas_model, midas_transform)
    313 rot_mat = p3dT.euler_angles_to_matrix(torch.tensor(rotate_xyz, device=device), "XYZ").unsqueeze(0)
    314 print("rot_mat: " + str(rot_mat))
--> 315 next_step_pil = dxf.transform_image_3d(img_filepath, midas_model, midas_transform, DEVICE,
    316                                         rot_mat, translate_xyz, args.near_plane, args.far_plane,
    317                                         args.fov, padding_mode=args.padding_mode,
    318                                         sampling_mode=args.sampling_mode, midas_weight=args.midas_weight)
    319 return next_step_pil

File ~\miniconda3\envs\disco-diffusion\lib\site-packages\torch\autograd\grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
     24 @functools.wraps(func)
     25 def decorate_context(*args, **kwargs):
     26     with self.clone():
---> 27         return func(*args, **kwargs)

File ~\disco-diffusion\disco_xform_utils.py:19, in transform_image_3d(img_filepath, midas_model, midas_transform, device, rot_mat, translate, near, far, fov_deg, padding_mode, sampling_mode, midas_weight, spherical)
     17 @torch.no_grad()
     18 def transform_image_3d(img_filepath, midas_model, midas_transform, device, rot_mat=torch.eye(3).unsqueeze(0), translate=(0.,0.,-0.04), near=2000, far=20000, fov_deg=60, padding_mode='border', sampling_mode='bicubic', midas_weight = 0.3,spherical=False):
---> 19     img_pil = Image.open(open(img_filepath, 'rb')).convert('RGB')
     20     w, h = img_pil.size
     21     image_tensor = torchvision.transforms.functional.to_tensor(img_pil).to(device)

FileNotFoundError: [Errno 2] No such file or directory: 'prevFrame.png'

@RyushoYosei

Well, now I'm having this issue: I get all the way to the diffusion step, and it gives me this.


RuntimeError Traceback (most recent call last)

Input In [16], in <cell line: 204>()
205 model.load_state_dict(torch.load(custom_path, map_location='cpu'))
206 else:
--> 207 model.load_state_dict(torch.load(f'{model_path}/{get_model_filename(diffusion_model)}', map_location='cpu'))
208 model.requires_grad_(False).eval().to(device)
209 for name, param in model.named_parameters():

File ~\anaconda3\envs\disco-diffusion\lib\site-packages\torch\serialization.py:705, in load(f, map_location, pickle_module, **pickle_load_args)
700 if _is_zipfile(opened_file):
701 # The zipfile reader is going to advance the current file position.
702 # If we want to actually tail call to torch.jit.load, we need to
703 # reset back to the original position.
704 orig_position = opened_file.tell()
--> 705 with _open_zipfile_reader(opened_file) as opened_zipfile:
706 if _is_torchscript_zip(opened_zipfile):
707 warnings.warn("'torch.load' received a zip file that looks like a TorchScript archive"
708 " dispatching to 'torch.jit.load' (call 'torch.jit.load' directly to"
709 " silence this warning)", UserWarning)

File ~\anaconda3\envs\disco-diffusion\lib\site-packages\torch\serialization.py:242, in _open_zipfile_reader.init(self, name_or_buffer)
241 def init(self, name_or_buffer) -> None:
--> 242 super(_open_zipfile_reader, self).init(torch._C.PyTorchFileReader(name_or_buffer))

RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

@RyushoYosei

...Fixed that. It was working fine earlier today; suddenly now it can't find the custom model it was using earlier. Nothing has changed, but now it suddenly -never- loads custom models... like it can't see any of it @_@

@pikok85

pikok85 commented Aug 3, 2022

[screenshot]
Hi. I'm having issues with optical flow map generation. Previously I had an error importing from utils.utils; I got that resolved by renaming utils.utils to raftutils.utils, but now I'm getting this. Could you help please?

@blanelts

blanelts commented Aug 4, 2022

@pikok85 Hello. Were you able to solve the problem? I just have the same problem and don't know how to solve it.
[screenshot]

@Acephalia

Hello, thank you for the awesome work on this. I've managed to run all the clips without any errors, but on the final diffuse I'm getting an error: TypeError: ddim_sample_loop_progressive() got an unexpected keyword argument 'transformation_fn'

Would anyone be able to shed some light? It would be much appreciated, thank you!

Full error output :

Seed used: 1777865372
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Input In [19], in <cell line: 217>()
    216 torch.cuda.empty_cache()
    217 try:
--> 218     do_run()
    219 except KeyboardInterrupt:
    220     pass

Input In [6], in do_run()
    648     init = regen_perlin()
    650 if args.diffusion_sampling_mode == 'ddim':
--> 651     samples = sample_fn(
    652         model,
    653         (batch_size, 3, args.side_y, args.side_x),
    654         clip_denoised=clip_denoised,
    655         model_kwargs={},
    656         cond_fn=cond_fn,
    657         progress=True,
    658         skip_timesteps=skip_steps,
    659         init_image=init,
    660         randomize_class=randomize_class,
    661         eta=eta,
    662         transformation_fn=symmetry_transformation_fn,
    663         transformation_percent=args.transformation_percent
    664     )
    665 else:
    666     samples = sample_fn(
    667         model,
    668         (batch_size, 3, args.side_y, args.side_x),
   (...)
    676         order=2,
    677     )

TypeError: ddim_sample_loop_progressive() got an unexpected keyword argument 'transformation_fn'

@coltography

In Cell 1.3 I get this error:

ModuleNotFoundError                       Traceback (most recent call last)

Input In [4], in <cell line: 99>()
     97 from dataclasses import dataclass
     98 from functools import partial
---> 99 import cv2
    100 import pandas as pd
    101 import gc

ModuleNotFoundError: No module named 'cv2'

Anyone know how to get past this? I followed all instructions to a T. Re-did everything, still got the same error. No idea what could be wrong or different at this point.

@screan

screan commented Dec 4, 2022

Getting the same cv2 error.
