• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 13th, 2024

  • No, you can’t if you don’t know the libraries.

    IDE.

    Python is entirely dependent on what libraries you include

    ??

    If you don’t know what you need you can’t do shit.

    IDE.

    The problems you propose in your comment are not only greatly exaggerated but have already been solved for decades using conventional tools, AND they apply to literally all languages, having nothing at all to do with Python. Good try! My statement holds true.

    Maybe your assumption is that you’re in a cave writing code in pencil on paper, but that’s not a typical working condition. If you have access to Claude to use as a crutch, then you have access to search for an available python library and read some “Getting Started” paragraphs.

    Seriously, if the only real value that AI provides is “you don’t need to know the libraries you’re using” 💀 that’s not as strong an argument as you think it is lmaooo “knowing the libraries” isn’t exactly a software engineering challenge that people actually struggle with…
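    For what it’s worth, the “conventional tools” point is concrete: Python itself ships introspection helpers for discovering what a library offers. A minimal sketch, stdlib only:

```python
# Conventional library discovery without any AI: Python's built-in
# introspection tools list what a module exposes and how to call it.
import json
import inspect

# List the public names a library exposes.
public = [name for name in dir(json) if not name.startswith("_")]
print(public)  # includes 'dumps', 'loads', ...

# Read a function's signature and the first line of its docstring.
print(inspect.signature(json.dumps))
print(json.dumps.__doc__.splitlines()[0])
```

    Combine that with a library’s “Getting Started” page and an IDE’s autocomplete, and “not knowing the libraries” stops being a blocker.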

  • Willison has never claimed to be an expert in the field of machine learning, but you should give more credence to his opinions.

    Yeah, I would if he didn’t demonstrate such blatant misconceptions.

    Willison is a prominent figure in the web-development scene

    🤦 “They know how to sail a boat so they know how a car engine works”

    Willison never claims or implies this in his article, you just kind of stuff those words in his mouth.

    Reading comprehension. I never implied that he says anything about censorship; it is a correct and valid example showing that his understanding of how system prompts work is wrong. “Define censorship” is not the argument you think it is lol. Okay though, I’ll define the “censorship” I’m talking about as refusal behavior introduced during RLHF and DPO alignment, and no, the system prompt will not change that behavior.

    EDIT: saw your edit about him publishing tools that make using an LLM easier. Yeahhhh lol writing Python libraries to interface with LLM APIs is not LLM expertise, that’s still just using LLMs, but programmatically. See the analogy about being a mechanic vs. a good driver.
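    To make the “just using LLMs programmatically” point concrete, here’s roughly what such a wrapper boils down to. The payload shape is the common chat-completions style; the model name is a hypothetical placeholder, not any specific vendor’s API:

```python
# A thin "LLM library" is mostly this: assemble a JSON payload and POST
# it to an HTTP endpoint. No model internals are involved anywhere.
import json

def build_chat_request(prompt: str,
                       system: str = "You are a helpful assistant.") -> dict:
    """Assemble a chat-completion-style request payload."""
    return {
        "model": "example-model",  # hypothetical model id
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_chat_request("Summarize this article.")
print(json.dumps(payload, indent=2))
```

    Everything interesting happens on the other side of the HTTP call, which is exactly the mechanic-vs-driver distinction.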



  • If the system prompt doesn’t tell it to search for Elon’s views, why is it doing that?

    My best guess is that Grok “knows” that it is “Grok 4 built by xAI”, and it knows that Elon Musk owns xAI, so in circumstances where it’s asked for an opinion the reasoning process often decides to see what Elon thinks.

    Yeah, this blogger shows a fundamental misunderstanding of how LLMs and system prompts work. LLM behavior is not directly controlled by the system prompt the way this person imagines. For example, censorship that is present in the training set will be “baked in” to the model, and the system prompt will not affect it, no matter how firmly the system prompt tells the LLM not to be censored in that way.

    My best guess is that the LLM is interfacing with a tool in order to search through tweets, and the training set that demonstrates how to use the tool contains example searches for Elon Musk’s tweets.
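    That tool-use guess can be sketched like this; `search_tweets` and the call format are hypothetical stand-ins for whatever xAI actually uses:

```python
# Minimal sketch of LLM tool use. In training, tool-use demonstrations
# pair a request with a structured call such as
# {"tool": "search_tweets", "query": "..."}; the model learns to emit
# similar calls, and the harness parses and executes them.
import json

def search_tweets(query: str) -> list[str]:
    # Stand-in for a real search backend.
    return [f"result for: {query}"]

TOOLS = {"search_tweets": search_tweets}

def run_tool_call(raw: str) -> list[str]:
    """Parse a model-emitted tool call and dispatch it."""
    call = json.loads(raw)
    return TOOLS[call["tool"]](call["query"])

# The model, not the system prompt, chose this query string.
model_output = '{"tool": "search_tweets", "query": "from:elonmusk opinion"}'
print(run_tool_call(model_output))
```

    If the tool-use demonstrations in the training set skew toward searches for a particular account, the model will reproduce that bias with no system-prompt instruction required.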