Calling a Windows web service from a WSL2-based application

If you are running Stable Diffusion through the AUTOMATIC1111 web UI and calling inference through its web API from another application, you might have to configure networking between WSL2 and Windows. Here is how to do that.

Setting up the Windows application

You should let the Windows application listen for connections from anywhere (and make sure your firewall protects the app from external connections).


# run the webui with the API enabled and listening on all interfaces
.\webui.bat --api --listen

# open your firewall to inbound connections from WSL2 (7860 is the webui's default port)
New-NetFirewallRule -DisplayName "WSL bridge to Windows" -InterfaceAlias "vEthernet (WSL)" -Direction Inbound -Protocol TCP -LocalPort 7860 -Action Allow
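
If you want to double check that the UI is actually bound to all interfaces (and not just loopback) before touching WSL2, you can look for the listening socket on the Windows side. This sketch assumes the webui is on its default port of 7860.

# confirm something is listening on the webui port on all interfaces (LocalAddress 0.0.0.0)
Get-NetTCPConnection -LocalPort 7860 -State Listen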

Setting up the application in WSL2

To call an API on the Windows machine you cannot use "localhost", because inside WSL2 that refers to the WSL2 environment itself, not to Windows.

WSL2 preconfigures a hostname for the Windows machine: name-of-windows-computer.local. You can get the name of your Windows computer with WIN+X, then select System, and it should be right there at the top.
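
You can also grab the same name from a terminal on the Windows side; this is just the standard Windows hostname command, nothing WSL2-specific.

# print the Windows computer name (works in PowerShell or cmd)
hostname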

You can test that the name resolves correctly with a curl from inside WSL2:

# e.g. for the automatic1111 OpenAPI docs use:
curl http://dar-windows-machine.local:7860/docs
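
Once that responds, the application in WSL2 can call inference the same way, just using the .local name instead of localhost. Here is a minimal sketch, assuming the default port and the txt2img endpoint the webui exposes when started with --api; check /docs for the exact request schema.

# a minimal text-to-image request from WSL2 (the response contains base64-encoded images)
curl -X POST "http://dar-windows-machine.local:7860/sdapi/v1/txt2img" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a lighthouse at sunset", "steps": 20}'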

Conclusion

For me, it's been faster to run AI inference directly on Windows than to pass it through WSL2 > Windows > Nvidia hardware.

But I prefer coding in Linux, so being able to trigger that inference from WSL2 is really handy.