First published at 03:28 UTC on September 23rd, 2022.
see code here
https://pastebin.com/0uCWSg7f
and here
https://pastebin.com/2U1Ym5qc
based on lstein's Stable Diffusion fork: https://github.com/lstein/stable-diffusion.git
Each image is saved with metadata like this:
```
{
  "seed": 111,
  "fn": ".\\outputsid-samples\\cube.-2022-9-22-804020\\frames\\p29s2c0s25x512.png",
  "prompt": "cube. block. wooden. spinning. rotating. bright red. low poly. 3d. letter a. alphabet. simple. focal length. ",
  "data": {
    "seed": 111,
    "variation_amount": 0,
    "width": 512,
    "height": 512,
    "cfg_scale": 7,
    "steps": 25,
    "iterations": 1
  }
}
```
Each parameter can be scanned quickly because rendering uses only 25 sampling steps.
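Records like the one above are easy to collect and filter with a few lines of Python. This is only a sketch: it assumes the metadata is stored in one JSON sidecar file per frame, and the function names are mine.

```python
import json
from pathlib import Path

def load_frame_metadata(frames_dir):
    """Load per-frame JSON metadata records (assumes one *.json per frame)."""
    records = []
    for path in sorted(Path(frames_dir).glob("*.json")):
        with open(path) as f:
            records.append(json.load(f))
    return records

def frames_for_seed(records, seed):
    """Return the filenames of all frames rendered with a given seed."""
    return [r["fn"] for r in records if r["seed"] == seed]
```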
I back up and mutate the PyTorch parameters as follows:
```
modifications = []
eparams = []
eparam_names = []
for name, param in model.named_parameters():
    eparam_names.append(name)
    eparams.append(param)

backups = []
for mm in mutations:  # mutations: indices of the parameter tensors to perturb
    # keep a CPU copy of the original values so they can be restored later;
    # .copy() matters when the model is on CPU, where .numpy() shares memory
    vect = eparams[mm].data.detach().cpu().numpy().copy()
    # add Gaussian noise scaled by the mutation strength
    eparams[mm].data += mutation * torch.randn_like(eparams[mm])
    backups.append([mm, vect])
    modifications.append([mm, mutation, eparam_names[mm],
                          str(eparams[mm].size()), len(eparams)])
```
and after the render I restore them:
```
for p in backups:
    eparams[p[0]].data = torch.from_numpy(p[1]).cuda()
```
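The backup/mutate/restore cycle above can be illustrated without a model or a GPU. This NumPy sketch (the function names are mine, not from the scripts above) demonstrates the invariant that matters: after restoring, the parameters are identical to the originals.

```python
import numpy as np

def mutate_params(params, indices, mutation, rng):
    """Add Gaussian noise scaled by `mutation` to the chosen parameter
    arrays in place, returning backups so the change can be undone."""
    backups = []
    for i in indices:
        backups.append((i, params[i].copy()))  # save original values first
        params[i] += mutation * rng.standard_normal(params[i].shape)
    return backups

def restore_params(params, backups):
    """Undo mutate_params by copying the saved values back in place."""
    for i, saved in backups:
        params[i][...] = saved
```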