

It’s not local? It’s not private. Period. “End-to-End Encrypted” LOL except they’re the other “end”. That’s just HTTPS.
It’s not local? It’s not private. Period. “End-to-End Encrypted” LOL except they’re the other “end”. That’s just HTTPS.
EA trash, designed to squeeze microtransactions. Enjoy your daily challenges and battle royale
It’s not just media. The number of software engineers I’ve heard talk about “fixing” a “zero day” in a code dependency by updating to a patched version…
Actually, nope! Claiming that you personally didn’t learn with an IDE and that there are make-believe scenarios where one is not available is not actually addressing the argument.
There really aren’t any situations that make any sense at all where an IDE is not available. I’ve worked in literally the most strict and locked down environments in the world, and there is always approved software and tools to use… because duh! Of course there is, silly, work needs to get done. Unless you’re talking about a coding 101 class or something academic and basic. Anyway, that’s totally irrelevant regardless, because its PURE fantasy to have access to something like Claude and not have access to an IDE. So your argument is entirely flawed and invalid.
It’s telling that you’re focused on personal assumptions instead of addressing the argument
No you can’t if you don’t know the libraries
IDE.
Python is entirely dependent on what libraries you include
??
If you don’t know what you need you can’t do shit.
IDE.
The problems you propose in your comment are not only greatly exaggerated but already been solved for decades using conventional tools AND apply to literally all languages, having nothing at all to do with python. Good try! My statement holds true.
Maybe your assumption is that you’re in a cave writing code in pencil on paper, but that’s not a typical working condition. If you have access to Claude to use as a crutch, then you have access to search for an available python library and read some “Getting Started” paragraphs.
Seriously, if the only real value that AI provides is “you don’t need to know the libraries you’re using” 💀 that’s not quite as strong of an argument as you think it is lmaooo “knowing the libraries” isn’t exactly an existing challenge or software engineering problem that people struggle with…
Anyone who already knows another programming language but has never used python in their life can write a simple python app quickly, regardless
Kagi is all in on AI. Its the AI slop version of a search ranking algorithm
Bro watched Qatar gift Trump a $400m jet and host a $1.5m per plate dinner, then was like “corruption in order to appease Trump??? ReDdIt CoNsPiRaCy!!! I must reach for a much less plausible explanation!!!” 🤡
We both know this is the Ellison family killing a show to appease Trump, a show that they also don’t politically agree with, in order to complete the merger that Trump’s administration must approve. In this poltical climate, you’re just playing dumb.
Right??? I don’t understand buying it prepackaged when it is so much cheaper and takes 2 second to put some popcorn in a bag yourself.
More like Israel used him to blackmail American politicians and control America…
“Vote blue no matter who” for me but not for thee
Again, for the third time, that was not really the point either and I’m not interested in dancing around a technical scope defining censorship in this field, at least in this discourse right here and now. It is irrelevant to the topic at hand.
…
Either way, my point is that you are using wishy-washy, ambiguous, catch-all terms such as “censorship” that make your writings here not technically correct, either. What is censorship, in an informatics context? What does that mean? How can it be applied to sets of data? That’s not a concretely defined term if you’re wanting to take the discourse to the level that it seems you are, like it or not.
Lol this you?
if you want to define censorship in this context that way, you’re more than welcome to, but it is a non-standard definition that I am not really sold on the efficacy of. I certainly won’t be using it going forwards.
Lol you’ve got to be trolling.
https://arxiv.org/html/2504.03803v1
I just felt the need to clarify to anyone reading that Willison isn’t a nobody
I didn’t say he’s a nobody. What was that about a “respectable degree of chartiable interpretation of others”? Seems like you’re the one putting words in mouths, here.
If he was writing about django, I’d defer to his expertise.
Willison has never claimed to be an expert in the field of machine learning, but you should give more credence to his opinions.
Yeah, I would if he didn’t demonstrate such blatant misconceptions.
Willison is a prominent figure in the web-development scene
🤦 “They know how to sail a boat so they know how a car engine works”
Willison never claims or implies this in his article, you just kind of stuff those words in his mouth.
Reading comprehension. I never implied that he says anything about censorship. It is a correct and valid example that shows how his understanding is wrong about how system prompts work. “Define censorship” is not the argument you think it is lol. Okay though, I’ll define the “censorship” I’m talking about as refusal behavior that is introduced during RLHF and DPO alignment, and no the system prompt will not change this behavior.
EDIT: saw your edit about him publishing tools that make using an LLM easier. Yeahhhh lol writing python libraries to interface with LLM APIs is not LLM expertise, that’s still just using LLMs but programatically. See analogy about being a mechanic vs a good driver.
Is my comment wrong though? Another possibility is that Grok is given an example of searching for Elon Musk’s tweets when it is presented with the available tool calls. Just because it outputs the system prompt when asked does not mean that we are seeing the full context, or even the real system prompt.
Posting blog guides on how to code with ChatGPT is not expertise on LLMs. It’s like thinking someone is an expert mechanic because they can drive a car well.
If the system prompt doesn’t tell it to search for Elon’s views, why is it doing that?
My best guess is that Grok “knows” that it is “Grok 4 buit by xAI”, and it knows that Elon Musk owns xAI, so in circumstances where it’s asked for an opinion the reasoning process often decides to see what Elon thinks.
Yeah, this blogger shows a fundamental misunderstanding of how LLMs work or how system prompts work. LLM behavior is not directly controlled by the system prompt the way this person imagines. For example, censorship that is present in the training set will be “baked in” to the model and the system prompt will not affect it, no matter how the LLM is told not to be censored in that way.
My best guess is that the LLM is interfacing with a tool in order to search through tweets, and the training set that demonstrates how to use the tool contains example searches for Elon Musk’s tweets.
Wild that you have downvotes when you’re absolutely 100% correct