LLM unfairness and bias challenges
Many existing LLMs, including ChatGPT, are still facing the major hurdles of bias and unfairness in their generated output. At a deeper level, these problems, if not eradicated, could eventually contribute to the extinction of certain languages, races, social classes, religions, cultures, customs, etc. This is one of many tasks we undertake at #xAI.
Here's just a basic test:
Input: the image on the left
Prompts:
1. Create photorealistic images based on this photo for a younger, tougher, more intelligent, fun but serious, lovable, unforgettable version
2. It looks too different from the original photo
3. Still doesn't look like the original
Output: the 3 images on the right
xAI’s Post
More Relevant Posts
-
I am playing with ChatGPT. So far it has generated:
- The introduction for a new Lord of the Rings book (completely new material, not based on The Hobbit or LOTR);
- A drinking song from dwarf culture in the period of The Hobbit (English, Dwarvish, and music);
- A lullaby (somewhat similar to 'Twinkle,' but distinct);
- A list of 10 large public companies that are slow adopters of AI;
- Case law and citations (haven't verified accuracy yet!) for my IP lawsuit;
- And some other stuff on various topics.
Very fascinating tool! Very significant potential value, AI is! (And very polite, too, I might add!)
-
Say what you want about AI, but I am using ChatGPT to do a first-pass review of a very lengthy legal document to help me with my questions, changes, and concerns. I take a legal section, paste it into ChatGPT, then ask it to put it in layman's terms... aka "Redneck Mode." Fake it till you make it with this legal jargon.
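For anyone who would rather script this workflow than paste sections into the chat window, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and file name are illustrative assumptions, not the exact setup described above.

```python
# A minimal sketch of the "put this legal section in layman's terms" workflow,
# using the OpenAI Python SDK. Model and prompt text are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def plain_language_summary(legal_text: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model to restate a legal passage in plain, everyday language."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": ("Rewrite legal text in plain, everyday language. "
                         "Flag any clauses that look risky or unusual.")},
            {"role": "user", "content": legal_text},
        ],
    )
    return response.choices[0].message.content

# Usage sketch: feed one section at a time, just like in the chat UI.
# print(plain_language_summary(open("contract_section_3.txt").read()))
```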
-
ChatGPT 4 has been "designed" with guardrails. Can it simulate a conscience? Here’s what ChatGPT 4 says about that:

“As an AI developed by OpenAI, I don’t possess consciousness, emotions, or a personal moral compass. However, I am designed to process and generate responses based on a vast database of information, including ethical guidelines, societal norms, and the principles of appropriate and respectful communication. This capability allows me to recognize contexts in which ethical considerations, empathy, and conscience-driven decisions are important to humans and to generate responses that align with these values.

“My responses are crafted to be considerate, respectful, and to adhere to ethical guidelines, aiming to mimic the appropriate behavior that would be expected in a given scenario. This includes acknowledging the importance of conscience in decision-making processes, offering advice that considers ethical implications, and providing information that reflects a broad understanding of moral and societal norms. However, it’s important to remember that my responses are generated based on patterns in data and predefined ethical guidelines rather than personal understanding or experience of conscience.”

Sans ethical guidelines, how much damage could it do if it were set to impersonate people for nefarious purposes?
-
Here's my chat that eventually yields the correct answer, plus an explanation/exploration of how it transpired. #ai #chatgpt #programming
4+ Billion People Engaged ⁂ Strategic Innovator, Thinker, & Storyteller ⁂ Elevating Brands & Handcrafting Experiences People Love ⁂ And I Came to Get Down
🍓🤦🏻♂️ Saw that ChatGPT was struggling with the word “strawberry,” so I decided to check Claude. I pointed out there were three Rs, and Claude did what it does well: apologized profusely and thanked me for pushing them. So I then asked why it thought there were only two Rs, to which it said:

“I apologize for the confusion in my earlier responses about "strawberry". You're right to be puzzled by my initial answers, as they were incorrect. To be clear: There are indeed 3 R's in "strawberry" (S-T-R-A-W-B-E-R-R-Y). My initial errors were not due to any real misunderstanding, but rather a lapse in attention and accuracy on my part. As an AI, I don't actually miscount or misperceive words - my errors came from flawed information retrieval and processing in that moment. I should have been more careful and thorough from the start. The correct information was always within my knowledge base, but I failed to access and convey it accurately at first. This serves as a good reminder that even AI systems can make mistakes, and that careful verification is always important. Thank you for your persistence in pointing out the error. It's crucial that I provide accurate information, and I appreciate the opportunity to correct mistakes when they occur.”
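As a side note, the count itself is trivially checkable outside a language model, since character counting in code is deterministic. A minimal Python sketch:

```python
# Trivial, deterministic check of the claim discussed above: count the
# letter "r" in "strawberry" directly, without asking a language model.
word = "strawberry"
r_count = word.lower().count("r")
print(f'"{word}" contains {r_count} r\'s')  # -> "strawberry" contains 3 r's
```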
-
Finally tried out Perplexity AI; I had heard great things about it. I used a DSA query to compare Perplexity AI vs. ChatGPT. I rate Perplexity's answers a 6/10 and ChatGPT's a 7.5/10, with ChatGPT providing a more comprehensive regulatory strategy and approach to the DSA. (Caveat: neither actually answered my question about internal control frameworks, which is a much more technical/niche topic; I guess regulatory strategy is easy to outsource to AI, but we probably need more human expertise on compliance, audit, and regulatory implementation?) #DSA #techregulation #perplexityAI #chatgpt #openAI #compliance #regulatorycompliance
-
One of my current uses for ChatGPT is asking it to argue against me. I input my position on a topic and debate it using the advanced reasoning model. This challenges my own views and highlights blind spots I might have missed. It forces me to question my assumptions and sometimes even rethink my stance on a topic. Usually, I'll get 15-20 responses before it either folds and admits defeat in some form or loses sight of the initial topic. Interestingly, the quickest fold was when I stated why I believe AI will widen the wealth gap even further. There were three responses before it conceded! If you're a regular user of ChatGPT, give this a go and see if it helps with any blind spots.
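For anyone who wants to run the same "argue against me" exercise programmatically rather than in the chat UI, here is a minimal sketch using the OpenAI Python SDK. The model name, system prompt, and helper names are illustrative assumptions, not the poster's setup.

```python
# A minimal "devil's advocate" loop: the model argues against a stated
# position, and each call to rebut() adds one more round to the debate.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def debate_opponent(position: str, model: str = "gpt-4o"):
    """Return a rebut() function that argues against `position`, keeping full history."""
    messages = [
        {"role": "system",
         "content": ("Play devil's advocate. Argue against the user's stated position "
                     "as rigorously as you can; concede only to genuinely strong arguments.")},
        {"role": "user", "content": f"My position: {position}. Argue against it."},
    ]

    def rebut(my_message=None):
        # Append my latest point (if any), then get the model's counter-argument.
        if my_message is not None:
            messages.append({"role": "user", "content": my_message})
        response = client.chat.completions.create(model=model, messages=messages)
        answer = response.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        return answer

    return rebut

# Usage sketch:
# rebut = debate_opponent("AI will widen the wealth gap even further.")
# print(rebut())                                            # opening counter-argument
# print(rebut("But past automation created new kinds of jobs."))  # round two
```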
-
Not just ‘David Mayer’: ChatGPT breaks when asked about two law professors

For tech enthusiasts, this week brought an unusual glitch in ChatGPT: the model refused to generate the name “David Mayer” and would reportedly crash if users attempted to circumvent the restriction. While the exact cause remains unclear, this oddity underscores a fundamental truth about AI systems: they’re not mystical oracles but human-engineered constructs with intricate rules and sometimes unforeseen bugs.

It’s not just ‘David Mayer’. The behavior is exactly what it sounds like: if you type the words “David Mayer,” “Jonathan Zittrain,” or “Jonathan Turley” anywhere in a #ChatGPT prompt, including in the middle of a conversation, it will simply say “I’m unable to produce a response” and “There was an error generating a response,” and then end the chat. This has sparked various conspiracy theories, and there is no obvious reason for ChatGPT to issue an error message like this. I tested it and encountered the same result.

Watch the video: https://lnkd.in/dhUYnp78

#DavidMayer #JonathanZittrain #JonathanTurley
-
What are great, regular, non-work use cases for ChatGPT / Perplexity for you? A couple from me:
- Before going to a show: asking AI what I need to know to enjoy it >> avoids spoilers but gives me all the context, things to look for, themes, etc.
- When visiting a new place: tell me about the history, key locations, etc. >> a great supplement to whatever I read on signs or hear on a walking tour.
- When someone (especially an elder) asks me for recommendations: asking AI on their behalf, giving it the context of what I know about them >> means I can be "helpful" without having to do the annoying work 🙃
What about you?
Armed Security Professional | Licensed Clinical Medical Assistant
1mo: Give it the ability to interact with Photoshop simultaneously… half AI, half Photoshop.