After forever of trying to get Stable Diffusion to work with my new AMD GPU (6700 XT), this is what finally fixed it for me. So glad to finally have Stable Diffusion back after switching off of Nvidia! (Credit to user DemiEngi on reddit for the fix.)

First run

amdgpu-install

Before we continue, we have to add the repo for the ROCm version we’ll be using, this can be done by running

sudo add-apt-repository "deb [arch=amd64] https://repo.radeon.com/rocm/apt/5.2.5 ubuntu main"
sudo apt-get update

This bit seems to be highly specific to this version - you can’t just change the number and have it work if you want to try a newer version of ROCm, sadly.
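If apt-get update complains about the repo being unsigned, you probably also need AMD’s signing key - the command from AMD’s ROCm install docs around this version was along these lines:

wget -qO - https://repo.radeon.com/rocm/rocm.gpg.key | sudo apt-key add -

(apt-key is deprecated on newer Ubuntu releases but still works, it just warns at you.)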

This script can install various bits and pieces on an as-needed basis; unlike Windows, where it’s one and done, the whole thing is modular on Linux. In my case I haven’t a fucking clue what’s necessary for Stable Diffusion and what isn’t, and I don’t intend on spending ages reading up to see what’s what, so I install what I think is everything using this command

sudo amdgpu-install --rocmrelease=5.2.5 --usecase=graphics,multimedia,rocm,amf,lrt,opencl,hip,mllib,workstation

The --rocmrelease=5.2.5 part may change depending on what’s available at the time. I based my decision to use 5.2.5 on what PyTorch recommends at https://pytorch.org/get-started/locally/ - at the time of writing, the 1.13.1 version recommends ROCm 5.2, so I went with the latest revision of that, 5.2.5.
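If you’re curious what each of those usecases actually covers, the installer should be able to list them for you:

sudo amdgpu-install --list-usecase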

Give yourself a reboot once this is done and make sure everything’s working okay, then run

rocminfo | grep 'Name'    

to make sure your GPU is being detected

Name: gfx1031
Marketing Name: AMD Radeon RX 6700 XT

should be somewhere in the output.

You’ll note it says gfx1031 in mine - the 6700 XT isn’t officially supported by ROCm, but in practice it works fine if you tell ROCm to treat it as a gfx1030 (RX 6800/6900 series) card, so you run

export HSA_OVERRIDE_GFX_VERSION=10.3.0

to make the system lie about what GPU you have and boom, it just works. We’ll cover how to make this persistent further down if you want that.
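As an extra sanity check, rocm-smi (which comes along with the rocm usecase) should also pick up the card and print a little table of temperature, clocks and VRAM use:

rocm-smi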

Lastly you want to add yourself to the render and video groups using

sudo usermod -a -G render <YourUsernameHere>
sudo usermod -a -G video <YourUsernameHere>
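Group changes only take effect on a fresh login, so log out and back in (or just reboot again), then check it worked with

groups

which should now list render and video alongside your other groups.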

3 - Install Python - this bit seems pretty straightforward, but in my case it wasn’t that clear-cut: ROCm depends on Python 2, but Stable Diffusion uses Python 3

sudo apt-get install python3    

then you want to edit your .bashrc file to make a shortcut (called an alias) so that typing python runs python3 - to do this, run

nano ~/.bashrc

or use whatever text editor you prefer, I’m not your boss. Add this

alias python=python3
export HSA_OVERRIDE_GFX_VERSION=10.3.0

to the bottom of the file, and now your system will default to python3 instead, and the GPU lie is persistent, neat.
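Open a new terminal, or run

source ~/.bashrc

in the current one, and both the alias and the override will be active - a quick

python --version

should now report a 3.x version.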

4 - Get AUTOMATIC1111

This step is fairly easy - we’re just gonna download the repo and do a little bit of setup. You wanna make sure you have git installed

sudo apt-get install git

then once you’ve got that you run

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui
python -m venv venv
source venv/bin/activate
python -m pip install --upgrade pip wheel

This clones the latest version, moves into that folder, makes and activates a virtual environment, then updates pip. This is where we stop following the AUTOMATIC1111 wiki and go our own way.
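One thing to keep in mind: the virtual environment is only active in the current terminal, so whenever you come back to this in a new session you’ll want to re-activate it first:

cd stable-diffusion-webui
source venv/bin/activate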

5 - Install Torch - thankfully this bit is kinda easier, as the PyTorch website helpfully provides the exact command to run

pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2

With this version of Torch installed you’re almost done, but just to be safe, check that you haven’t somehow got the non-ROCm version installed (as happened to me) using

pip list | grep 'torch'   

You should only see versions that say rocm5.2; if not, uninstall the offending ones (then re-run the install command above) using

pip uninstall torch==<WrongVersionsHere>
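If you want to be extra sure Torch can actually see the GPU before launching anything, this one-liner (run inside the venv, with the HSA override set) should print True followed by your card’s name:

python -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))"

The ROCm build still goes through the usual torch.cuda interface, so True here means HIP is working.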

6 - Running Stable Diffusion

At this point you should be basically good to go. For my specific GPU I’d recommend having --medvram enabled, but you can probably get away without it if you’re sticking to single 512x512 images - I can get as far as 760x760 before it complains about a lack of VRAM. To run it, you

python launch.py --precision full --no-half

or

python launch.py --precision full --no-half --medvram

if you wanna do big pictures.

This should cover just about everything, but if I’ve forgotten anything and you run into issues, let me know and I’ll try to help and edit the post. If you’d rather not type the arguments every time, you can edit webui-user.sh and uncomment line 13

export COMMANDLINE_ARGS=""

then add your arguments, e.g. --precision full --no-half, inside the quotes, so you only have to run ./webui.sh instead, if you prefer.
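For example, with the flags from this guide, that line might end up as

export COMMANDLINE_ARGS="--precision full --no-half --medvram"

(drop --medvram if you don’t need it).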