my one recommendation to anyone who wants to check out AI stuff: unless it's LLMs, do it on Linux. Windows does everything it can to make it a huge PITA, as if i were a child that doesn't know how to properly manage a computer, i guess.

I have half a mind to buy a datacenter GPU and rack a machine "in town" just to have a dedicated box for myself. I have 2 GPU boxes, but they're a 1070 Ti and a 3060 Ti:
ok for smaller LLMs and audio stuff, but too low on VRAM for Stable Diffusion or any decent LLM model.

@picofarad I'm making images on windows right now with no problems

@s8n Flux.1 Dev, using a sampler other than Euler, with ControlNet?

Because i got SD 1.5 running on Windows fine; the Flux-capable GUIs suck.

i will not CLI generate anything, I'm not a poor

@picofarad idk what you're working with there but I'm running a webui called forge, which is a fork of automatic1111, and I'm using a model called illustrious and several supporting tools. I've uploaded a bunch of images the past few days that I think are pretty good
@picofarad I've used flux but it doesn't do much yet afaict except really good text and some interesting sentence parsing

@s8n
2022/10/25 elf A1111 sd1.5 model.bin
2023/02/11 tom sizemore A1111 some other model.safetensors

the rest are just what caught my eye in the output folders of various SD UI things. I really like img2img and controlnet; if forge does flux with CN and image-to-image... I'll buy ya a coffee.

missed doing this stuff, actually.

i use IRL photos a lot so i can't just dump my grids :-(

@picofarad @Nicktherat the fediverse is for asymmetrical communication dude it doesn't matter if you respond to me immediately or next week it's exactly the same
@picofarad @Nicktherat not asymmetrical what's the word for that... eh I forgot
@Nicktherat @picofarad asynchronous, don't smoke this much weed at once

@s8n @Nicktherat chatgpt is really good at this

when i actually want to communicate to people with degrees i use it to check i don't sound like a neanderthal

or as george costanza said, "Humans, we think so highly of ourselves we named ourselves smart twice: homo sapiens sapiens." that was on criminal minds btw, and it was a good episode if someone edited it down to 20 minutes

asynchronous though (my brain, not a gpu)

@picofarad @Nicktherat you shouldn't worry about that, people who are highly intelligent are burdened by their intelligence equally to how you may be burdened by your lack thereof so that's the least necessary audience for that kind of thing. I only use big words and complex sentences when I'm trying to say something very specific
@picofarad @Nicktherat if you want to imagine it, think of how your life would be if you could understand everything that you can't but it was all infuriatingly retarded. That's basically how it is

@s8n @Nicktherat a plurality of shit i do understand is maddeningly retarded.

I can hand build an antenna to spec for any frequency and bandwidth (within reason) you could want.

How the f does RF radiate instead of just making heat? i got no fucking clue. I've read the books. I can build circuits. I cannot *design* circuits.

okay now explain in detail how electrons work in a wire. or a transistor. Or lightning.

At least there's music and decent moving pictures.

@picofarad @s8n @Nicktherat they don't work inside the wire generally. there's an electrostatic potential defined by the material and a magnetic barrier generated when the cables are powered and the electrons :agummyparty: across the surface within the barrier

(i think there is some action inside the metal but a lot of energy is actually just riding the surface; it's called the skin effect, i think?)
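For scale on that skin-effect guess, the textbook skin depth is delta = sqrt(2*rho / (omega*mu)); a quick sketch with copper's nominal resistivity (numbers are ballpark, real cable will differ):

```python
# Rough skin-depth estimate for copper: delta = sqrt(2*rho / (omega * mu)).
# Assumes non-magnetic copper (mu ~= mu0) and textbook resistivity.
import math

rho = 1.68e-8              # copper resistivity, ohm*m, room temperature
mu0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def skin_depth(freq_hz: float) -> float:
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * rho / (omega * mu0))

for f in (60, 1e6, 100e6):  # mains, AM-ish, FM-ish
    print(f"{f:>12.0f} Hz -> skin depth ~ {skin_depth(f) * 1000:.4f} mm")
# ~8.5 mm at 60 Hz, ~0.065 mm at 1 MHz, ~0.0065 mm at 100 MHz
```

so at RF the current really is crowded into a thin shell near the surface; at mains frequency the "skin" is most of a small wire anyway.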

@icedquinn @Nicktherat @s8n in my current brain's model, the actual physical, "in space and measurable" movement of individual electrons in a wire that has potential and current is very slow (shockingly slow, i think, like 1 mph or 10 mph, walking... slow?)

i am plagued with static electricity and i know it's a single polarity being built up near the surface but i cannot fathom the interactions that cause that to occur.
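For a sanity check on "walking slow": the classical drift velocity is v = I / (n*A*q), and with hypothetical numbers (1 A through 2 mm² of copper) it comes out far below walking pace; the thing that moves fast is the field, not the electrons.

```python
# Classical electron drift velocity: v = I / (n * A * q).
# Hypothetical numbers: 1 A of current through a 2 mm^2 copper conductor.
n = 8.5e28      # free-electron density of copper, electrons per m^3 (textbook value)
A = 2e-6        # cross-sectional area, m^2 (2 mm^2)
q = 1.602e-19   # elementary charge, C
I = 1.0         # current, A

v = I / (n * A * q)  # metres per second
print(f"drift velocity ~ {v * 1000:.3f} mm/s (~{v * 3600:.2f} m per hour)")
# roughly 0.04 mm/s, i.e. ~0.13 m per hour: much slower than walking
```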

@icedquinn the analogies all break down. Nothing makes sense. If you have 1 unit of 20/2 wire in space and you apply 120VAC across it for a mile and put a lightbulb in series at t'other end,

and turn it on and off, how long does it take for the lightbulb to shut off, in seconds as defined by SI?

@icedquinn i said wire but i both implied and meant to say "Romex™ or equivalent"

and "turn on for 1 SI second and then shut off and don't change state anymore"

apply voltage and current for 1 second, then off.
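As a rough answer to the mile-of-wire question: the bulb turns off when the field change reaches it, and that travels at the cable's velocity factor times c, not at electron speed. A sketch assuming a velocity factor of about 0.66 for insulated cable (bare wire in vacuum would be closer to 1):

```python
# Delay for the switch-off to reach the far end of a 1-mile run.
# Signal speed = VF * c, where VF is the cable's velocity factor (assumed here).
c = 299_792_458.0     # speed of light, m/s
length_m = 1609.344   # one mile in metres

for vf in (1.0, 0.66):  # bare wire in vacuum vs. a typical insulated cable
    delay_s = length_m / (vf * c)
    print(f"VF {vf:.2f}: ~{delay_s * 1e6:.1f} microseconds")
# ~5.4 us at VF 1.0, ~8.1 us at VF 0.66 -- microseconds, not hours of electron travel
```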

@s8n @Nicktherat i was subtly shilling for the live radio program i am gunna tuna in to in ten minutes

@s8n i use OG automatic1111 on the 3060 machine and it works great, but on this 3090 machine SDXL works with A1111, though i locked the venv: it works as-is, i don't touch it.

So if i wanted flux i had to install something. i found "automatic" on github; I think the dev bit off more than they can handle, progress is slow and it feels "enterprise-y", like they want to get bought.

I'll check out forge. Does it support horde or whatever it's called?

@picofarad idk about horde, I've only used it for my use case, which is generation of images of cartoons or animated characters, upscaling, in/outpainting, meme-making, and composite-building with gimp. I have one primary video card and I run a web browser full-screen on the one monitor that video card is connected to. It's a 24GB 3090. I also have my 3 other monitors on a 980ti, and I surf and use the computer for other things while generating, as a multitask.

forge supports flux but you probably want to start with sdxl, it's the most mature at present. Flux is not ready for prime time in my personal opinion unless you are generating simple images with text. It's amazing at putting text in images, this was made in flux
@picofarad in comparison, this was made in sdxl from that smaller source image

@s8n something i should note about the older models is that textual embeddings and LoRA make a huge difference. I have an SD lora training workflow, and i have, let's say, too many great image datasets (iykwim), so i can take 15 gigs of images of a single person and either create a textual embedding that won't pass as them but will still be detailed AF, or train a 150MB lora model and it'll be as if i took the pictures with my nikon and anonymized them with photoshop.

with garbage SD models. even sd 1.5 original.

@s8n text embeddings are like 100 kilobytes; they add to the prompt in a way that doesn't affect the prompt size, other than the <foo:weight> parameter *in* the prompt

(wouldn't fit in the prior post)
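For reference, in A1111-style UIs an embedding is triggered by its filename token in the prompt (and can take the normal attention-weight syntax), while the angle-bracket form is the LoRA syntax; the names here are made up:

```
masterpiece, portrait photo, (my_embedding_name:1.2), 35mm, shallow depth of field
masterpiece, portrait photo, <lora:my_lora_name:0.8>, 35mm, shallow depth of field
```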

@picofarad I haven't used text embeddings at all since I switched to sdxl

@s8n yeah once i could train my own LoRA and discovered all the existing loras on civitai i haven't used em either

however, if i was building out a service to do this quickly "at scale" (wtfever that means) i would use textual embeddings, and if pressed i probably have three models, combined less than 10GB, that i would ship

For elevator pitchable IP blocks i'd just use lora instead of embeddings, same otherwise.

@s8n oddly no one wants SD as a service, everyone wants LLMs

I even had short clips working. i only share the trash ones though, where the AI changes the outfit every 15 frames and the background every 22.

@picofarad I'm not very interested in video generation personally, I think the stills are cooler. I disagree that nobody wants stable diffusion as a service, there are a bunch of users I know who would like to be able to access one. It's just not worth paying a fee to use, and it's very expensive to run

@s8n i actually deployed fiber to a room in my house just to use an SD/LLM/whisper/demucs/spleeter host that lives in there, with 1 defunct wifi connection for backhaul and 1 tenuous wifi connection for public access.

because it's winter. And AI generates roughly as much heat as crypto, and seems to make people other than me happier.

i'll ping you for access. If you know how to make it so i can't see anyone else's images that'd be super.

libsodium the PNGs? wrap all images in that with the user's PK?
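If the goal is "only that user can open their outputs", libsodium sealed boxes do exactly that; a minimal sketch with PyNaCl (pip install pynacl), assuming the user hands over their public key out of band:

```python
# Wrap a generated PNG so only the holder of the matching private key can read it.
# Sealed boxes: anyone can encrypt to a public key; only the private key decrypts.
from nacl.public import PrivateKey, PublicKey, SealedBox

# --- user side, done once; the private key never leaves the user ---
user_sk = PrivateKey.generate()
user_pk_bytes = user_sk.public_key.encode()   # 32 raw bytes the user sends the host

# --- host side: wrap each image for that user ---
def wrap_png(png_bytes: bytes, user_pk_bytes: bytes) -> bytes:
    return SealedBox(PublicKey(user_pk_bytes)).encrypt(png_bytes)

# --- user side: unwrap ---
def unwrap_png(blob: bytes, user_sk: PrivateKey) -> bytes:
    return SealedBox(user_sk).decrypt(blob)

fake_png = b"\x89PNG...image bytes..."
assert unwrap_png(wrap_png(fake_png, user_pk_bytes), user_sk) == fake_png
```

this only hides files at rest; whoever runs the box still sees the plaintext at generation time.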

@s8n automatic, at least on the backend, "drops" vram into dram when it's not generating - at least with flux; i'll test other models in a moment.

So technically, if i knew in advance how many concurrent users would be "on" and would fit cached in 64GB of dram, i could create straw users and spin up a separate python instance for each in an encrypted folder; they get unique ports. Mine's port 8008 and i can just pre-set up 3 or 4 more "accounts"

The user'd have to clean up if they didn't want anyone else to see it
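A sketch of that straw-user scheme, assuming an A1111-style webui that accepts --port and --data-dir (check your fork's `python launch.py --help`; flag names differ between forks, and the paths here are made up):

```python
# Spawn one webui instance per "straw user", each on its own port with its own data dir.
import subprocess
from pathlib import Path

WEBUI_DIR = Path("/opt/stable-diffusion-webui")    # hypothetical install location
USERS = {"alice": 8008, "bob": 8009, "carol": 8010}

procs = []
for name, port in USERS.items():
    data_dir = Path("/srv/sd-users") / name        # hypothetical per-user (encrypted) folder
    data_dir.mkdir(parents=True, exist_ok=True)
    procs.append(subprocess.Popen(
        ["python", "launch.py", "--port", str(port), "--data-dir", str(data_dir)],
        cwd=WEBUI_DIR,
    ))

for p in procs:
    p.wait()
```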

@s8n with 64GB i can host 3 concurrent, with tmpfs for ./outputs/ storage. So a reboot would wipe everything in storage, encrypted or not.

so a user who knew me even ephemerally could request a reboot

I just gave myself reflux because i gotta automate starting the sd webui(s). actually, rather than 3 of the same, i can run the a1111 that i already have with a small model, plus automatic, and... forge?

See, i could do horde, which enlists that computer to render for public users, unauthed and ephemerally.

@s8n but f other people fr

@picofarad >they get unique ports
this is how I genned on dual 980tis, I ran two instances one for each video card
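Pinning one instance per card is usually just CUDA_VISIBLE_DEVICES; a sketch assuming two GPUs and the same hypothetical launch command as above:

```python
# One webui process per GPU: each child only "sees" the card it was assigned.
import os
import subprocess

for gpu_id, port in ((0, 8008), (1, 8009)):
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    subprocess.Popen(
        ["python", "launch.py", "--port", str(port)],
        cwd="/opt/stable-diffusion-webui",   # hypothetical install location
        env=env,
    )
```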

@s8n back when i was a cloud shouter for pay we called this "map/reduce" and our specific team charged ~$750k/yr takehome to know how to do it

@picofarad if you ever ran one in xl or ca you ran it on my hardware

@s8n nah local only but your resources were appreciated.

current immediate is a 3090, a 3060, a 1070 Ti, and a few 1050 Tis. Next ring out (no cost to me) is too many to list, but i just pitched someone else doing the work as

"hey i have a great idea for HaaS - Heat as a Service"

@picofarad yeah I still have some of my favorite 1.5 models. I am very excited because I found an sdxl version of one of my favorite sd1.5 models today