mem_eff_attention_deterministic_algorithms_warn.py
import xformers
import xformers.ops
import torch

device = 'cuda'
dtype = torch.float16
# (batch, sequence length, heads, head dim); q, k, v all share this shape,
# so the attention output has the same shape as q.
shape = (1, 1024, 16, 16)

torch.manual_seed(0)
q = torch.rand(shape, device=device, dtype=dtype, requires_grad=True)
k = torch.rand(shape, device=device, dtype=dtype, requires_grad=True)
v = torch.rand(shape, device=device, dtype=dtype, requires_grad=True)

# Ask PyTorch to flag non-deterministic ops, but only warn instead of raising.
torch.use_deterministic_algorithms(True, warn_only=True)

# Force the CUTLASS-based forward/backward attention ops.
op = xformers.ops.MemoryEfficientAttentionCutlassOp
r = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=op)
r.backward(torch.ones_like(r))  # grad_output of ones, same shape as r
'''
Running this script should produce warnings like the following:

UserWarning: efficient_attention_forward_cutlass does not have a deterministic
implementation, but you set 'torch.use_deterministic_algorithms(True, warn_only=True)'.
You can file an issue at https://github.com/pytorch/pytorch/issues to help us
prioritize adding deterministic support for this operation.
(Triggered internally at /.../ATen/Context.cpp:82.)

UserWarning: mem_efficient_attention_backward_cutlass does not have a deterministic
implementation, but you set 'torch.use_deterministic_algorithms(True, warn_only=True)'.
You can file an issue at https://github.com/pytorch/pytorch/issues to help us
prioritize adding deterministic support for this operation.
(Triggered internally at /.../ATen/Context.cpp:82.)
'''
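
# --- Optional sketch, not part of the original gist ---
# One way to check for these warnings programmatically, e.g. in a regression
# test: PyTorch forwards its C++ TORCH_WARN messages to Python's standard
# `warnings` machinery, so they can be recorded with warnings.catch_warnings().
# The substring matched below is taken from the messages quoted above and may
# differ across PyTorch versions.
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # do not deduplicate repeated warnings
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=op)
    out.backward(torch.ones_like(out))

messages = [str(w.message) for w in caught]
assert any("does not have a deterministic implementation" in m for m in messages), \
    "expected a non-determinism warning from the CUTLASS attention ops"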