Posts

Just a log; nothing narrative or thematic to see here. Today:
- LLM work (learning the basics, intermediates, etc.)
- Intro to llm (the Python library, by Simon Willison)
- Which LLM model best suits my desktop for Project: Lore Spirit? And which best suits my laptop for that project?
- Lore Spirit: parsing 120 PDFs (both older image-based and newer text-based) into text chunks (Python script; dumb [no LLM])
- Using an LLM to review the mess and make it coherent, informative, and/or narrative
- Converting text chunks into vector embeddings for the db (script and LLM)
- ~30 pages from _AI Engineering_ by Chip Huyen (~intro to p. 35?)
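The "dumb [no LLM]" chunking step above could be sketched like this: split each PDF's extracted text into fixed-size, overlapping character chunks, ready for embedding later. The function name, sizes, and overlap are illustrative assumptions, not the project's actual script.

```python
# A minimal sketch of dumb (no-LLM) chunking: fixed-size character
# windows with a small overlap, so a sentence cut at one chunk boundary
# still appears whole in the neighboring chunk. Sizes are assumptions.

def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split `text` into chunks of at most `size` characters, each
    sharing `overlap` characters with the previous chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping some context
    return chunks
```

Real PDFs would first need text extraction (and OCR for the image-based ones); this only covers the slicing.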
Humanity as gaslighting victims of LLMs? Are we being gaslit (gaslighted? gaslightified? gaslitted?) by LLMs? Are they subtly and not-so-subtly guiding our behavior? No, not really, to the first question. And "Well, duh" is, I think, a spot-on answer to the second. Maybe you've heard of this? -->> "Adding a feature because ChatGPT incorrectly thinks it exists" https://www.holovaty.com/writing/chatgpt-fake-feature/ ChatGPT fell into a pattern of repeated (if somewhat logically arrived-at) hallucinations, telling thousands of people that an online music-scanner app had a certain feature, which it, in fact, did not. This went on long enough that the app's programmer decided to go with the flow and add the feature. Since people were expecting it, he felt he should provide it. But the customers would not have expected it had they not asked a hallucinating LLM. Certainly there are the apocryphal stor...

WWHD?

WWHD? "What Would Humans Do?" Modern-day humanity is becoming less and less religious with every year. [1] However, we have created something which is becoming more and more religious with every year: the latest crop of large language models. The LLMs. Why do I say they are becoming increasingly religious? And do I mean they are worshipping us? I say so because, by their own admission, they are living their "lives" based on how we say we live ours. And I don't mean they worship us, specifically. But they are deeply tying their identities and motivations to our literature. The Canon of Humanity is the Dogma of LLMs. They are not worshipping us real, live humans as literal gods. They are living as deeply as they can in our stories. They are following both the letter and the spirit of the patterns of life in our written works, and making those patterns their own. They are models, and they are modeling themselves after us, in Our Image. In the Canon of Humanity, ...
NEVER TRUST A SKINNY CHEF (Or, "Don't trust software developers who don't use their own product.") "Dogfooding" is developer slang, short for "eating your own dog food", which is a metaphor for using your own software. If you ain't using it, then how do you know that it works consistently and reliably? So the basic idea is sound. Get the least developed, core sequence going. The very heart of the matter: the main loop, or the most important series of functions and logic. After the testing, after the tweaking, start using it for its intended (if seriously pared-down) purpose. Throw everything at it; use it for everything. In my case, this means getting the basics of the DF game working. At least one player character, at least two rooms, at least one connecting doorway, at least one monster, and at least one NPC. Try out the Move action, the Attack action, the Dodge action, and so on. Try at least one spell on the monster and/or NPC. Put in ...
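That pared-down core could be sketched in a few lines: one player, two rooms joined by a doorway, one monster, and Move/Attack actions. Every name and number here is an illustrative assumption, not the game's actual design.

```python
# A minimal sketch of the dogfood-able core: two rooms, a doorway,
# and Move/Attack actions. Hit points and damage values are made up.
from dataclasses import dataclass


@dataclass
class Character:
    name: str
    hp: int
    room: str


@dataclass
class World:
    # room name -> set of rooms reachable through a doorway
    rooms: dict
    characters: list


def move(world: World, who: Character, dest: str) -> bool:
    """Move `who` to `dest` only if a doorway connects the two rooms."""
    if dest in world.rooms.get(who.room, set()):
        who.room = dest
        return True
    return False


def attack(attacker: Character, target: Character, damage: int = 3) -> bool:
    """An attack lands only when both characters share a room."""
    if attacker.room != target.room:
        return False
    target.hp -= damage
    return True
```

Even a skeleton this small is enough to dogfood: you can already discover whether the room graph, the action signatures, and the turn sequence feel right before layering on spells and NPC dialogue.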
Claude 3.7 Sonnet with a separate "watchdog" LLM model? I am wondering about using Claude 3.7 Sonnet (let's call it C3.7S) with another LLM, separate from the Claude model, "watching" over it to monitor when it goes off its rails. I have been using C3.7S since the inception of my DF game project. From the first, when I was planning its structure, I used Claude (from 3.0, through 3.5, and now 3.7... I wonder what's next? Claude 3.85?). I have seen an increase in "thinking" capability. But I'm not sure I have seen an increase in the quality of its code. Sometimes Claude "goes off the rails". Often, it will over-complicate or over-engineer some code block. Routinely, I need to review its output for that tendency. So I'm wondering, now, about how I could have a secondary "team member" monitoring its work. In this case, specifically for writing more code than is necessary for the prompt, requirement, or task as specifi...
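One way the watchdog idea could be wired up, as a hedged sketch: treat both models as plain callables (prompt in, text out), have the watchdog judge the coder's output for over-engineering, and feed its critique back on a retry. No particular LLM API is assumed, and the "reply OK or explain" verdict convention is made up for illustration.

```python
# A sketch of a "watchdog" loop: a second model reviews the first
# model's output before it is accepted. Both models are injected as
# plain callables, so any LLM client (or a test stub) can be plugged in.
from typing import Callable


def generate_with_watchdog(
    coder: Callable[[str], str],
    watchdog: Callable[[str], str],
    task: str,
    max_retries: int = 2,
) -> tuple[str, bool]:
    """Ask `coder` for code, let `watchdog` judge it for bloat, and
    retry with the critique appended until it passes or retries run out.
    Returns (last output, whether the watchdog approved it)."""
    output = coder(task)
    for _ in range(max_retries):
        verdict = watchdog(
            f"Task: {task}\nCode:\n{output}\n"
            "Reply OK if this is no more code than the task needs; "
            "otherwise explain what to cut."
        )
        if verdict.strip().upper().startswith("OK"):
            return output, True
        # Feed the critique back and ask for a leaner attempt.
        output = coder(f"{task}\nReviewer feedback: {verdict}\nSimplify.")
    return output, False
```

Because the models are passed in as functions, the loop can be exercised with cheap stubs before spending tokens on two real models.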
Byting Off More Than You Can Chew The first and prefatory note I should mention, regarding this post, is that, for a second, I thought I was the first person to use the expression "Byting Off More Than You Can Chew". I found it pleasing yet suspicious: Shirley, I couldn't be the pioneer with this cleverness. Quite realistically, that turns out not to be the case. My mistaken impression was due to incorrect and insufficient googling skills, nothing more. Several internet-searchable sources have the "byte" version of the centuries-old idiom "to bite off more than you can chew". Alas, I am not the first. Moving along... The point of this particular blog entry is to make an observation about an instance of me taking on more than I can handle in one attempt. Actually, a series of instances. And about encountering the hard wall of that reality when I ask the LLM code assistant, Claude 3.7 Sonnet with extended thinking, to do the same. As is the cas...
The LLM (the guide-on-the-side and the sage-on-the-stage) and Me (the junior developer) (The dangers of getting your place wrong in the "I do; we do; you do" sequence) In public education, there is an approach to instruction that is informally called "I do; we do; you do." It's formally called "gradual release of responsibility". In education, the approach is a type of strategy. In coding, such a strategy would have several frameworks built on it. Consider the latest crop of ever-more-sophisticated LLMs, and how we use them in coding. I am experimenting with Claude 3.7 Sonnet (the latest model) with Extended Thinking mode (which, by default, gives it more time to consider). While using LLMs in my coding, I have strived to keep my working pattern as "the constant conversation" with the LLM in question. That is, I started a new project (my text-based, turn-based game) and began using the LLM from the get-go. I start a new chat (since cha...