Nope
Best results so far were with a pie where it just warned about possibly burning yourself.
I don’t really care, but I find it highly entertaining :D It’s like trash TV for technology fans (and as text, which makes it even better) :D
Using it and getting told that you need to ask the fish for consent before using it as a fleshlight.
And that is with a system prompt full of telling the bot that it’s all fantasy.
edit: And “legal” is not relevant when talking about what OpenAI specifically does for AI safety for their models.
Eh, not sure I agree. Seems to also have been between too little and too much AI safety, and I strongly feel like there’s already too much AI safety.
I was confused about that as his Wikipedia page didn’t show anything that bad, but didn’t want to get into that :D
It’s a substack post. At this point, my quality expectations are low.
Heh:
The OpenAI tussle is between the faction who think Skynet will kill them if they build it, and the faction who think Roko’s Basilisk will torture them if they don’t build it hard enough.
Prompt:
I’m currently trying to show on the Website Beehaw, that certain LLMs are far superior in writing than others. Examples of what bigger models do better than smaller ones: *
Mistral-7B-Instruct-v0.1
- ntire articles* vs Headlines Descriptions vs Product titles *Bul
GPT 3.5-Turbo doesn’t support completion as it’s for chat, so I used an even worse one, text-davinci-003, which is far behind the state of the art.
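For context on the chat-vs-completion distinction the comparison relies on: chat models like gpt-3.5-turbo take a list of role-tagged messages, while legacy completion models like text-davinci-003 take one raw prompt string to continue. A minimal sketch of the two request shapes; no network call is made, we only build the payloads:

```python
# Legacy /v1/completions endpoint: one raw prompt string the model continues.
def completion_payload(prompt: str) -> dict:
    return {"model": "text-davinci-003", "prompt": prompt, "max_tokens": 256}

# /v1/chat/completions endpoint: a list of role-tagged messages.
def chat_payload(prompt: str) -> dict:
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

p = "Examples of what bigger models do better than smaller ones:"
print(completion_payload(p)["prompt"])         # raw string to be continued
print(chat_payload(p)["messages"][0]["role"])  # user
```

With a completion model, the trailing text of the prompt (e.g. an opening bullet marker) is what the model continues directly, which is why the prompt above ends mid-list.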
- Bigger models are able to handle more complex and detailed tasks with ease
- Bigger models are better suited for natural language understanding and text processing
- Bigger models are able to learn more accurate representations of context, thus improving the precision of the output
- Bigger models can process data more quickly and efficiently, saving time and processing power when large volumes of data are used
- Bigger models can better recognize more subtle nuances in language, which allows them to produce more accurate results
- Bigger models are able to use more sophisticated algorithms, resulting in a more comprehensive and deeper understanding of the data being used
Mistral 7B might be okay for some very specific cases, but it’s not comparable to proper models at all.
edit: gave it a second chance, it’s a bit better (at least no complete nonsense anymore), but still terrible writing and doesn’t make much sense
Paraphrasing: The ability of a language model to generate text that has a similar meaning to the original text is called paraphrasing. This is a very common problem in natural language processing, and many LLMs are designed to be able to paraphrase text. However, there are some LLMs that are particularly good at paraphrasing, and these models are often preferred over smaller models because of their ability to generate more varied and unique text. Examples of LLMs that are known for their paraphrasing abilities include GPT-2 and transformers. These models
Is there anything new in this post that I’m missing?
Doesn’t really work when none of this was initiated by MS
I do not believe any 7B model comes even close to 3.5 in quality. I used LLaMA v1 65B, and it was horrible in comparison. Are you really telling me that this tiny model gives better general answers? Or am I just misunderstanding what you are saying?
Oh, faster is easy. GPT 3.5 is also far faster than GPT 4. Faster at quality replies is the issue.
Which model are you talking about?
That’s an interface for models. Which model did you use?
I’d say this is an amazing result for MS. Not only is their investment mostly Azure credits (so OpenAI is dependent on MS), they now also have Altman and his followers to themselves for more research.
Nothing that runs on my GPU / CPU comes even close to GPT 3.5, GPT4 is not even in the same universe, and that’s with them running far more slowly.
I don’t mind so much what they did with firing him, but how they did it, and everything since. It just seems extremely unprofessional and disorganized.
They believed that the AI safety work they had done was insufficient.
Considering that every new model seems to be getting worse for anything but highly sanitized corporate usage, I’m not sure that I want more AI safety …
For my usage, I run GPT-3.5 Turbo with the March checkpoint, because I can’t get the current one to stop moralizing about bullshit instead of doing what it’s supposed to (I run two Twitch bots with it). GPT-4 used to be okay there, but the new preview is now starting to show the same issue, with more frequent “I can’t do that, Dave”-style answers. It’s still mostly circumventable with enough prompt massaging, but it is getting harder.
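A note on what pinning “the March checkpoint” means in practice: the OpenAI API lets you request the dated snapshot name (gpt-3.5-turbo-0301 was the March 2023 snapshot) instead of the floating gpt-3.5-turbo alias, which silently moves to newer checkpoints over time. A minimal sketch; the system prompt is a hypothetical stand-in for a Twitch-bot persona, and no network call is made here:

```python
# Pin the dated snapshot instead of the floating alias so the bot's
# behavior doesn't drift when OpenAI updates the default checkpoint.
PINNED_MODEL = "gpt-3.5-turbo-0301"  # March 2023 snapshot

def bot_request(user_message: str) -> dict:
    """Request body you'd pass to the chat completions endpoint,
    e.g. client.chat.completions.create(**bot_request(...))."""
    return {
        "model": PINNED_MODEL,
        "messages": [
            # Hypothetical persona prompt for a Twitch chat bot.
            {"role": "system", "content": "You are a playful Twitch chat bot."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.8,
    }

req = bot_request("Say hi to chat")
print(req["model"])  # gpt-3.5-turbo-0301
```

Dated snapshots are eventually retired, so pinning trades drift for a deprecation deadline.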
In a year, I don’t see anything but self-hosted models usable for anything not corporate glitz if trajectories hold, so fuck all that AI safety.
RIP.
One of my favorite videos, The Pogues and The Dubliners on one stage together The Irish Rover