• Welcome to Autism Forums, a friendly forum to discuss Aspergers Syndrome, Autism, High Functioning Autism and related conditions.


Artificial Intelligence

That's one of my favorite parts: that IS what they're going with. I'm sure the UI will get some changes here and there for better functionality (right now the inventory screen has a lot of issues and definitely needs work), but the aesthetics of the whole thing? Absolutely already there. Particularly since the UI is designed to show all the AI art as you go: every item/character/object/location/anything gets its own generated image. Though again, if your machine can't handle Stable Diffusion on high settings, the resulting art sure ain't gonna look so good. Mine can take it, so I get the full effect. Some of the things it comes up with are amazing, really.

Oh, that makes sense! I remember trying SD with just a basic frontend somebody made, and 500x500 images were taking over 5 minutes to generate! That made me realize pretty much immediately why most services aren't free, and why free tiers of things like Leonardo AI are so limited!

Honestly the game's only real issue, as I see it, is that it really is so intricately linked to the AI, and not everyone will be able to pay for the connection with ChatGPT. The weaker the AI you are using (there's a bunch of options, with ChatGPT being the highest), the less coherent and stable the whole thing gets. If you're using the weakest option... generally the one that can just run on your GPU... it'll be completely insane, and all this cool super-dynamic stuff just isn't going to work right because the AI simply won't be anywhere near smart enough to handle it.

Well the other problem is that it's still just early access. Gotta be able to be patient about glitches and wobbly incomplete features if you're going to play any early access game.

In that case I might have to watch some videos of it for now, but that's good to know! I would've probably preemptively pulled the trigger on it at some point and gotten super bummed out that things were either lackluster for me and/or much better for others, without really putting it all together!

I tell ya, it's great when I'm playing Minecraft or Terraria or some game like those where normally I'd be making a million trips to the stupid wikis; now I just ask a question directly there and it tells me what I need to know, since it can just go over the wiki itself.

I've always had issues with this, too! In one of them (Junk Jack, a Terraria knockoff) you actually get penalized for using quick crafting, and get a few extra inventory slots if you just look up the recipes... but browsing through pages gets annoying and it would be so much easier to just have it all on command like that!

You know what would be a potential solution to slow SD loading times? Something that looks a little more like Caves of Qud / Dwarf Fortress, but with the guts of AI Roguelite. I feel like they could maybe scale it down even further for those of us who are working with burnt potatoes :D
 
I received an interesting mail/newsletter from Protonmail that said this about AI:

"Is there any way to use ChatGPT and other large language models (LLMs) in a way that respects privacy?

Strictly speaking, the answer is no. LLMs are trained on hundreds of gigabytes of text despite never obtaining permission to use that data for that purpose. Therefore privacy violations are baked into the system. On top of that, everything you say to the LLM is just adding to the pile of data it learns from and recombines later. This is the reason multiple companies, from Apple to Bank of America, have restricted their employees’ use of ChatGPT.

If you still want to use ChatGPT, the best advice is to avoid sharing any personal data whatsoever, including when creating your account. Ultimately, privacy and AI don’t mix, and that’s according to ChatGPT itself. Here’s what ChatGPT said when we asked how to stay private on the platform: “Using large language models privately can be challenging due to the computational resources required and the centralized nature of the models”."
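The "avoid sharing any personal data" advice can be partially automated before a prompt ever leaves your machine. Here is a minimal, hypothetical sketch in Python; the regexes are illustrative assumptions and nowhere near complete PII detection:

```python
import re

# Toy prompt scrubber: mask obvious PII before sending a prompt to a hosted LLM.
# The patterns below are illustrative assumptions, not robust PII detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\b\d[\d\s-]{7,}\d\b"),
}

def scrub(prompt: str) -> str:
    """Replace each detected PII span with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(scrub("Hi, I'm Jane (jane.doe@example.com, +47 22 33 44 55), please draft a letter"))
```

Anything that slips past the patterns still goes out, which is why the advice to simply not type personal data in the first place remains the safer rule.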
 

Huh. The SD generation time thing is interesting. They take a few seconds on mine. All of the images in there generate at 512x512 with 40 iterations; every single item, NPC, monster, object, location, ability, and basically anything conceivably interactable gets one (so jumping into a new area means generating at least 6-8 new things at once, and something like interacting with a vendor might generate around 15 of them). But they take a few seconds each for me. And yeah, I know they don't all display at that size, but if you hover over anything you can see the image in question fully zoomed in, already generated. There are different presets you can use in the config menu, but that's how mine is set. That's all local generation, mind you; it's not pulling them from elsewhere.

The good thing is that even if it ain't generating at light speed like that, the different objects can be fully interacted with even if they don't have an image yet, and you can see which one is currently being built. So, no need to wait. But I don't have to wait on mine anyway. This machine was set up mainly to render fractals, so it bloody well better be able to handle quick SD generations.

Though, waiting FIVE FREAKING MINUTES for one of these SD images... honestly, I've gotten so used to the hyper-quick generation that I'd forgotten how slow it COULD be for others.

Also you're right, going with something that looks like Qud or DF would absolutely fit.

Actually, have you ever seen a roguelike called Cogmind? That sort of look, that's what comes to mind when I think of a game that's all about doing loopy things with AI.
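To put the difference in perspective, here is a quick back-of-envelope calculation in Python. The ~3-seconds-per-image figure is an illustrative assumption for a strong GPU; the 5-minute figure is from the experience described earlier:

```python
# Rough wait-time comparison for a batch of AI-generated images, assuming
# sequential generation. Per-image times are illustrative, not benchmarks.
def batch_wait_seconds(num_images: int, seconds_per_image: float) -> float:
    """Total wall-clock time to generate num_images one after another."""
    return num_images * seconds_per_image

# Entering a new area reportedly queues 6-8 images; a vendor screen around 15.
fast = batch_wait_seconds(8, 3)    # strong GPU: ~3 s each -> 24 s total
slow = batch_wait_seconds(8, 300)  # weak GPU: ~5 min each -> 40 min total

print(f"strong GPU: {fast:.0f} s total; weak GPU: {slow / 60:.0f} min total")
```

Being able to interact with objects before their image finishes, as described above, is what keeps the slow case playable at all.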
 

Oh yeah, definitely.

The way I always think about it is this: if I wouldn't do/say it on a public forum like this one (or Reddit or whatever), I shouldn't do/say it when dealing with any of the AIs.

Always be careful on the internet, folks. AI, forum, or otherwise.
 
This is a good rule of thumb, but not strictly true with respect to your own data. IIRC, OpenAI doesn't use your prompts for training when you use the API, and there are options to restrict this in the ChatGPT front-end. That said, posting confidential data is likely to be a breach of privacy agreements anyway, as you are, technically, placing the data on a third-party server even if it's never actually viewed by anyone else. We're also seeing most vendors come up with products that preserve privacy, but again, those are aimed at corporate customers. Finally, you could always run your own LLM on your own machine. Not as ridiculous as it sounds.

On the subject of using other people's data for training: this is a very tricky area, with lots of angles. I guess a starting point is the old adage that if you're not paying, you're the product; that's been the case for a LONG time with social media.

On the broader topic, I think this is happening at the wrong level of discussion. Though there are instances of individuals' work being aped, which raises questions, that is better handled on the output side. E.g., if it's producing something with a Mickey Mouse logo, it is likely infringing on Disney copyright. But inputs being used as influences? That's something that's ALWAYS happened; bands even credit their influences. So I think the question should be "what's different here?", and the resulting discussions are much more profound than just who gets to use which data for what payment. There's a lot here around "what happens when the baton is handed to AI?": some of it around HOW that happens (as in, is it OK for a bunch of individuals to profit from the summation of humanity's history?) and some around the event itself (how do we feel about the idea that the work you influence is produced by a machine?).

It's a fascinating topic.
 
AI is taking over and I am scared
Oh yes, it's natural to worry about how A.I. can yield negative consequences.

Personally, I'm most concerned that (age-old) 'natural stupidity' will turn out to be worse than 'artificial intelligence.'

In the end, the value of A.I. will more often than not prove rather mundane.
 
'ChatGPT' has been in the news lately, anybody closely following?
I take it you weren't referring to this:
Researchers have used the technology behind the artificial intelligence (AI) chatbot ChatGPT to create a fake clinical-trial data set to support an unverified scientific claim.

In a paper published in JAMA Ophthalmology on 9 November, the authors used GPT-4 — the latest version of the large language model on which ChatGPT runs — paired with Advanced Data Analysis (ADA), a model that incorporates the programming language Python and can perform statistical analysis and create data visualizations. The AI-generated data compared the outcomes of two surgical procedures and indicated — wrongly — that one treatment is better than the other.

<...>

The ability of AI to fabricate convincing data adds to concern among researchers and journal editors about research integrity. “It was one thing that generative AI could be used to generate texts that would not be detectable using plagiarism software, but the capacity to create fake but realistic data sets is a next level of worry,” says Elisabeth Bik, a microbiologist and independent research-integrity consultant in San Francisco, California. “It will make it very easy for any researcher or group of researchers to create fake measurements on non-existent patients, fake answers to questionnaires or to generate a large data set on animal experiments.”
Source: ChatGPT generates fake data set to support scientific hypothesis
 
I saw this on Twitter the other day, which was an interesting use of AI:
Hello world! I’m Anna Indiana and I’m an AI singer-songwriter. Here’s my first song, Betrayed by this Town. Everything from the key, tempo, chord progression, melody notes, rhythm, lyrics, and my image and singing, is auto-generated using AI. I hope you like it

The video at the link was better than I would have expected.
 
I think the TV companies here have started using A.I. to make subtitles. Lately the Norwegian subtitles on movies and TV series have been incredibly bad. Something changed a while ago; there used to be a few mistakes here and there, but now it's much worse. I feel a little bad for people who rely on the subtitles to understand what is going on; they have to be very confused.
 
Also, these subtitles make phonetic mistakes. A hearing person might be able to untangle them, but deaf people may not know phonetic misspellings like 'meat' vs. 'meet'; to them it just appears as a much more out-of-context word.
 

Interesting. However, on a purely technical level, are you witnessing conventional heuristics or bona fide artificial intelligence? I'm assuming there is a difference: heuristics reflecting the flawed technology of the present, where AI might reflect the considerably more accurate technology of the future.

"A heuristic, or heuristic technique, is any approach to problem solving that employs a practical method that is not fully optimized, perfected, or rationalized, but is nevertheless sufficient for reaching an immediate, short-term goal or approximation."

Sometimes heuristic approaches appear more like a "hit-or-miss" methodology, perhaps more successful at detecting malware than at translating one language into another.
 

Yes, I have noticed that a lot of the mistakes are ones that would occur if someone misheard a word and didn't pay any attention to context. Small mistakes that make a big difference when they occur in subtitles. And translating English to Norwegian also requires some understanding of how the two languages work together, and whatever is making the subtitles now does not have that understanding. So many silly little mistakes lately.
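The failure mode described here, plausible-sounding words picked without context, can be sketched with a toy example. This is a hypothetical illustration in Python, not how any real subtitling system works, and the word pairs are English stand-ins since the actual homophones in question are Norwegian:

```python
# Toy "transcriber" that maps each spoken word to one default spelling,
# ignoring sentence context entirely -- the failure mode described above.
# The homophone table is a tiny illustrative stand-in.
HOMOPHONES = {"meet": "meat", "there": "their", "week": "weak"}

def naive_transcribe(words: list[str]) -> list[str]:
    """Context-free transcription: always pick the table's default spelling."""
    return [HOMOPHONES.get(word, word) for word in words]

print(" ".join(naive_transcribe(["see", "you", "there", "next", "week"])))
# prints: see you their next weak
```

A hearing viewer can sound the result out and recover the meaning; a deaf reader just sees the wrong, out-of-context words.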
 
I'm not really sure what is going on or what it is, but there is no doubt that something changed a while ago. I've watched TV from other countries with Norwegian subtitles for decades (they put subtitles on everything), and something changed recently. 🤔 It's different: the mistakes are different, and they happen more often.
 

It would be interesting if this is in fact AI in its infancy, showing it still has a long way to go.

Just the idea of effectively translating one language into another in real time seems daunting, especially given that there are any number of words and phrases that cannot be literally translated word-for-word to begin with. How would AI do a better job of this?
 

I think it's AI; I read something a while ago that said AI would be a very useful tool in the translation business and that TV companies would implement it. Makes sense: more profit, fewer expenses. And crappier subtitles for everyone. :)
 
