my one recommendation to anyone who wants to check out AI stuff: unless it's LLMs, do it on Linux. Windows does everything it can to make it a huge PITA because i am a child that doesn't know how to properly manage a computer, i guess.
I have half a mind to buy a datacenter GPU and rack a machine "in town" just to have a dedicated box for myself. I have 2 gpu boxes, but they're a 1070ti and a 3060ti -
ok for smaller LLMs and audio stuff, but too low-VRAM for Stable Diffusion or any decent LLM model
@s8n Flux.1 Dev using a sampler other than euler, with controlnet?
Because i got SD 1.5 running on windows - the flux-capable GUIs suck.
i will not CLI generate anything, I'm not a poor
@s8n
2022/10/25 elf A1111 sd1.5 model.bin
2023/02/11 tom sizemore A1111 some other model.safetensors
the rest are just what caught my eye in the output folders of various SD UI things. I really like img2img and controlnet - if forge does flux with CN and image-to-image... I'll buy ya a coffee.
missed doing this stuff, actually.
i use IRL photos a lot so i can't just dump my grids :-(
@s8n https://imgur.com/a/2AesYco
i'm disabled, sorry
edit: my fingers aren't functioning, so i need to stop onlining now.
@Nicktherat if you're taking calls and it's not busy i might call in 2nite
@s8n @Nicktherat STOP READING MY HEAD
@s8n @Nicktherat chatgpt is really good at this
when i actually want to communicate with people with degrees, i use it to check that i don't sound like a neanderthal
or as george costanza said: "Humans, we think so highly of ourselves we named ourselves smart twice. Homo sapiens sapiens."
that was on criminal minds btw - it was a good episode, if someone edited it down to 20 minutes
asynchronous though (my brain, not a gpu)
@s8n @Nicktherat a plurality of shit i do understand is maddeningly retarded.
I can hand build an antenna to spec for any frequency and bandwidth (within reason) you could want.
How the f does RF radiate instead of just making heat? i got no fucking clue. I've read the books. I can build circuits. I cannot *design* circuits.
okay now explain in detail how electrons work in a wire. or a transistor. Or lightning.
At least there's music and decent moving pictures.
@icedquinn @Nicktherat @s8n my current brain's model: the actual physical, "in space and measurable," movement of individual electrons in a wire with potential and current is very slow (shockingly slow, i think - like 1 mph or 10 mph? walking... slow?)
i am plagued by static electricity, and i know it's a single polarity building up near the surface, but i cannot fathom the interactions that cause that to occur.
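That "shockingly slow" intuition can actually be checked with the standard drift-velocity formula v = I / (nAq). The numbers below are my own assumptions, not from the thread - 12 AWG copper carrying 1 A:

```python
# Electron drift velocity in a copper wire: v = I / (n * A * q).
# Assumed (hypothetical) numbers: 12 AWG copper wire carrying 1 A.
I = 1.0            # current, amperes
n = 8.5e28         # free-electron density of copper, electrons / m^3
A = 3.31e-6        # cross-sectional area of 12 AWG wire, m^2
q = 1.602e-19      # elementary charge, coulombs

v = I / (n * A * q)            # meters per second
mph = v * 3600 / 1609.34       # convert m/s to miles per hour

print(f"drift velocity: {v:.2e} m/s (~{mph:.5f} mph)")
```

Under those assumptions the drift comes out around a few centimeters per hour - far slower than walking. The fast thing in the wire is the field, not the individual electrons.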
@icedquinn the analogies all break down. Nothing makes sense. If you have a mile of 20/2 wire in space, apply 120VAC across it, and put a lightbulb in series at t'other end,
and turn it on and off - how long does it take for the lightbulb to shut off, in seconds as defined by SI?
@icedquinn i said wire but i both implied and meant to say "Romex™ or equivalent"
and "turn on for 1 SI second and then shut off and don't change state anymore"
apply voltage and current for 1 second, then off.
@s8n @Nicktherat i was subtly shilling for the live radio program i am gunna tune in to in ten minutes
@s8n i use OG automatic1111 on the 3060 machine and it works great. On this 3090 machine SDXL works with A1111, but i locked the venv - it works as-is, i don't touch it.
So if i wanted flux i had to install something - i found "automatic" - it's on github. I think the dev bit off more than they can handle - progress is slow and it feels "enterprise-y," like they want to get bought.
I'll check out forge. Does it support horde or whatever it's called?
@s8n something i should note about the older models: textual embeddings and LoRA make a huge difference. I have an SD lora training workflow and, let's say, too many great image datasets (iykwim), so i can take 15 gigs of images of a single person and either create a textual embedding that won't pass as them but will still be detailed AF, or train a 150MB lora model and it'll be as if i took the pictures with my nikon and anonymized them with photoshop.
with garbage SD models. even sd 1.5 original.
@s8n text embeddings are like 100 kilobytes - an embedding adds to the prompt in a way that doesn't blow up the prompt size, other than the <foo:weight> parameter *in* the prompt
this wouldn't fit in the prior post
@s8n yeah once i could train my own LoRA and discovered all the existing loras on civitai i haven't used em either
however, if i was building out a service to do this quickly "at scale," wtfever that means, i would use textual embeddings, and if pressed i probably have three models - combined less than 10GB - that i would ship.
For elevator-pitchable IP blocks i'd just use lora instead of embeddings, same otherwise.
@s8n oddly no one wants SD as a service everyone wants LLMs
I even had short clips working. i only share the trash ones, though - where the AI changes the outfit every 15 frames and the background every 22.
@s8n i actually deployed fiber to a room in my house just to use an SD/LLM/whisper/demucs/spleeter host that lives in there, on 1 defunct wifi connection for backhaul and 1 tenuous wifi connection for public access.
because it's winter. And AI generates roughly as much heat as crypto, and seems to make people other than me happier.
i'll ping you for access. If you know how to make it so i can't see anyone else's images that'd be super.
libsodium? wrap all the png images in that with the user's PK?
@s8n automatic, at least on the backend, "drops" VRAM into DRAM when it's not generating - at least with flux; i'll test other models in a moment.
So technically, if i knew in advance how many concurrent users would be "on" and could fit cached in 64GB DRAM, i could create straw users and spin up a separate python instance per user in an encrypted folder - each gets a unique port. Mine's port 8008 and i can just pre-set up 3 or 4 more "accounts"
The user'd have to clean up if they didn't want anyone else to see it
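The one-instance-per-straw-user scheme can be sketched as a command builder. `--port` and `--gradio-auth` are real A1111 launch flags; the usernames, base port, and throwaway password are made-up placeholders:

```python
# Sketch: build launch commands for N isolated webui instances,
# one port per straw user. Nothing is actually launched here -
# feed the lists to subprocess.Popen if you want that.
def launch_commands(users, base_port=8008):
    cmds = []
    for i, user in enumerate(users):
        port = base_port + i
        cmds.append([
            "python", "launch.py",
            "--port", str(port),                 # unique port per user
            "--gradio-auth", f"{user}:changeme", # placeholder credential
        ])
    return cmds

for cmd in launch_commands(["me", "straw1", "straw2"]):
    print(" ".join(cmd))
```

Each instance would run from its own encrypted working directory so the outputs stay separated per port.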
@s8n with 64GB i can host 3 concurrent users, with tmpfs for ./outputs/ storage. So a reboot would wipe everything in storage, encrypted or not.
so a user who knew me even ephemerally could request a reboot
I just gave myself reflux because i gotta automate starting the sd webui(s) - actually, rather than 3 of the same, i can run the a1111 i already have with a small model, plus automatic, and... forge?
See, i could do horde, which enlists that computer to render for public users, unauthed and ephemeral.
@s8n but f other people fr
@s8n back when i was a cloud shouter for pay we called this "map/reduce" and our specific team charged ~$750k/yr takehome to know how to do it
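For anyone who never paid the $750k: the pattern is just "map a function over the shards, then reduce the partial results into one answer." A toy sketch with made-up data:

```python
# Toy map/reduce: count words across text "shards" (the shards and
# words here are arbitrary examples).
from functools import reduce

shards = ["flux flux sd", "sd lora", "flux"]

def map_shard(shard):
    # map step: per-shard word counts
    counts = {}
    for word in shard.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def merge(a, b):
    # reduce step: fold two partial counts together
    for k, v in b.items():
        a[k] = a.get(k, 0) + v
    return a

total = reduce(merge, map(map_shard, shards), {})
print(total)  # {'flux': 3, 'sd': 2, 'lora': 1}
```

The map step parallelizes across machines; the reduce step is the only part that has to see everything - which is the whole trick.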
@s8n nah local only but your resources were appreciated.
current immediate inventory is a 3090, a 3060, a 1070ti, and a few 1050ti. Next ring out (no cost to me): too many to list, but i just pitched someone else doing the work as
"hey i have a great idea for HaaS - Heat as a Service"