    On The Uses For Generative AI

    In the age of generative AI and Large Language Models (LLMs), many people miss the point.

    At the start of this year, I purchased JetBrains AI (not sponsored!) for use in CLion. During the trial period, I found the multi-line auto-completion genuinely helpful, and the ability to ask an LLM occasional questions without leaving the editor was a major plus. Since then, JetBrains has continued to improve the service, adding features like code base integration.

    LLMs & Code Generation

    For the most part, I have little experience using LLMs to generate code. I’ve occasionally used them to generate small Python snippets as a quicker alternative to searching through documentation. However, as a low-level C++ programmer and library developer, the majority of the programming I do is still well beyond the current capabilities of large language models.

    Their limitations become clear in several ways: an inability to reason inductively; difficulty maintaining extensive context across files, classes, and types; and—most importantly—a lack of semantic understanding of why code is written a certain way.

    Outside of C++, I still don’t believe LLMs are particularly suited for code generation. Broadly, they can be useful for quickly prototyping simple ideas where complexity is low. But in any sufficiently large project—regardless of how English-like the language is—the LLM inevitably fails to maintain appropriate context. At some point, it will stop using existing constructs, whether they’re CSS classes, Python classes, or even C functions you or it already defined. This leads to code bases that are functionally messy and lack coherence.

    An Example of LLM Context Failure

    Although simplified, this example illustrates a recurring pattern I’ve encountered. My library, BLT, includes Rust-style range iterator functions. I originally wrote a batch of them for convenience, and they’ve turned out to be some of the most useful utilities in the library.
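
    To show what I mean, here’s a minimal sketch of the style. blt::iterate and .rev() are real BLT functions (both appear later in this post); the only assumption here is that the wrapper works over any standard container:

    C++
    #include <blt/iterator/iterator.h>
    #include <blt/logging/logging.h>
    #include <vector>
    #include <string>
    
    int main()
    {
      const std::vector<std::string> values {"a", "b", "c"};
    
      // Wrap the container, then use a plain range-for, Rust-style.
      for (const auto& str : blt::iterate(values))
        BLT_TRACE(str);
    
      // .rev() yields a reversed view, like Rust's .rev() adapter.
      for (const auto& str : blt::iterate(values).rev())
        BLT_TRACE(str);
    }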

    I’ve configured the built-in LLM window to use my code base, so it should have access to all relevant symbols and headers—especially since the necessary files are already included in the snippet.

    C++
    #include <blt/iterator/iterator.h>
    #include <blt/logging/logging.h>
    #include <vector>
    #include <string>
    
    int main()
    {
      const std::vector<std::string> special_strings {"I", "Love", "You", "Silly", "Billy"};
    }

    In this snippet, I’ve written the setup that will be fed into the LLM. I asked it to generate a function that logs the contents of the string vector in reverse order.

    An image of a chat window showing that LLMs fail to understand complex code bases, even when given code base access and hints about which functions to use.

## Me
Hello, could you make a function which prints logging messages of the elements of special_strings in reverse order?

## AI Assistant
Certainly! Since your code includes `<blt/logging/logging.h>`, I'll assume you want to use the logging mechanism provided by the `blt` library instead of `std::cout`.

Here's how you could implement a function to log each string in reverse order:

```c++
#include <blt/logging/logging.h>
#include <vector>
#include <string>

void log_reverse(const std::vector<std::string>& vec) {
    for (auto it = vec.rbegin(); it != vec.rend(); ++it) {
        BLT_LOG_INFO("{}", *it);
    }
}
```


And in your `main()` function, call it like this:

```c++
int main() {
    const std::vector<std::string> special_strings {"I", "Love", "You", "Silly", "Billy"};

    log_reverse(special_strings);

    return 0;
}
```


This will print each string from `special_strings` in reverse order as a logging message using the `BLT_LOG_INFO` macro.  
Let me know if you need the function to use a different log level or format!

    Looks pretty good, right? Wrong. First, it completely ignored the iterator include—which is understandable, since I didn’t explicitly ask it to use the iterators. The real issue is that while it understood it should log messages, it used a logging macro that doesn’t exist. The correct macro is BLT_INFO, but it hallucinated BLT_LOG_INFO instead. Okay, that might seem minor. Let’s take it further.
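
    For reference, here’s the AI’s function with that one fix applied. This sketch assumes BLT_INFO takes the message directly, the same way BLT_TRACE does in the next snippet:

    C++
    #include <blt/logging/logging.h>
    #include <vector>
    #include <string>
    
    // Identical to the AI's reverse-iterator approach, but using
    // BLT_INFO, the logging macro that actually exists in BLT.
    void log_reverse(const std::vector<std::string>& vec)
    {
      for (auto it = vec.rbegin(); it != vec.rend(); ++it)
        BLT_INFO(*it);
    }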

    What if we give it more context?

    I added this function to the same file:

    C++
    void reverse_log(const std::vector<std::string>& vec)
    {
      for (const auto& str : blt::iterate(vec).rev())
        BLT_TRACE(str);
    }

    An image of a chat window showing that even when you give LLMs more context, they often still fail to use functions that match your code base.

## Me
Hello, could you make a function which prints logging messages of the elements of special_strings in reverse order?

## AI Assistant
Here’s a function that prints logging messages of the elements of `special_strings` in reverse order using C++17. The code assumes that you're using `BLT_TRACE` for logging, as shown in your related snippet.

You can call this function from `main()` passing your `special_strings` vector:

```c++
#include <vector>
#include <string>
#include <blt/logging/logging.h>

// Prints logging messages of elements in reverse order
void log_reverse(const std::vector<std::string>& strings)
{
    for (auto it = strings.rbegin(); it != strings.rend(); ++it) {
        BLT_TRACE(*it);
    }
}

int main()
{
    const std::vector<std::string> special_strings {"I", "Love", "You", "Silly", "Billy"};
    log_reverse(special_strings);
    return 0;
}
```


**Explanation:**
- `strings.rbegin()` and `strings.rend()` are used to iterate in reverse.
- Each element is printed using `BLT_TRACE`.

Let me know if you need it adapted to another logging macro or for a different string/vector type!

    This time the result is much better, but it still didn’t use the special iterator functions. This highlights a key weakness of LLMs: they don’t respect your coding conventions or care about consistency. Sure, you can correct them at every step, but that quickly becomes tedious. Which brings me back to my central question: why not just write it yourself? Of course, this is a contrived example. I explicitly pointed out the mistake, and the LLM corrected it. But this pattern—having to argue with the LLM to get the result you want—is common in code generation. I’ll return to that shortly.

    An image of the chat window showing the corrected version, which uses the existing library functions, after I called the LLM out on the mistake.

## Me
Is there a reason why you didn't keep with consistency and use the correct iterator based functions provided by my library which you have codebase access to and is included in the file??

## AI Assistant
Thank you for your feedback, and you're absolutely right. Since your codebase uses a custom iterator utility (`blt::iterate()`), it's best practice to remain consistent and use it throughout for clarity and maintainability.

Here is the revised function using your iterator utilities, printing the strings in reverse order:

```c++
#include <blt/iterator/iterator.h>
#include <blt/logging/logging.h>
#include <vector>
#include <string>

void log_reverse(const std::vector<std::string>& special_strings)
{
    for (const auto& str : blt::iterate(special_strings).rev())
        BLT_TRACE(str);
}
```


This version uses `blt::iterate(special_strings).rev()` so that your project's preferred iterator protocol is respected.

You can call `log_reverse(special_strings);` from main as before. Let me know if you need further customization!

    But even beyond the question of whether LLMs can write good code, we need to ask: what happens when we start relying on them too much?

    Taking a Step Back

    There are broader concerns when it comes to using LLMs in development. As more companies jump on the AI bandwagon and begin mandating its use, we risk reaching a point where no one truly understands the codebase. Worse yet, we could face a shortage of experienced developers.

    Junior developers who rely on LLMs to write code for them may stop engaging with the underlying logic. They miss out on developing the intuition and pattern recognition that marks the journey from beginner to expert. The ability to look at unfamiliar code—perhaps even in an unfamiliar language—and still recognize structural patterns is a skill that could be lost.

    Many senior developers have commented on stream that they spend just as much time fighting with the LLM as they would writing the code from scratch.

    Given all this, I’d argue that training developers to rely on LLMs instead of learning to write code manually raises important concerns. Writing the code yourself fosters an intrinsic understanding of it—one that not only makes it easier to extend, but also makes you a better programmer.

    The Use Case for LLMs

    So far, I may have sounded like yet another anti-innovation, anti-AI guy. But everything I’ve written up until now is context—a preamble to what I actually wanted to talk about: the useful side of LLMs.

    The other day, I was on a walk with my boyfriend, and the topic of generative AI came up. Well—let’s be honest—I brought it up because I wanted to rant about how many people miss the point. LLMs don’t reason. But they are exceptionally good at understanding and working with language, hence the name.

    My boyfriend jokingly called them “the final boss of grammar checkers.” Honestly? That feels accurate.

    And that leads me to the most useful thing about LLMs: they help people write.

    I was never strong in English class. I’ve always been more of a math-and-science kind of person. I know the “better” route would be to learn the rules of English grammar and composition, but the return on investment just isn’t there for me—I simply can’t get my brain to engage with it.

    LLMs, however, give me a way to communicate clearly. Despite my mediocre writing skills, they usually understand what I’m trying to say and can restructure the sentence to better persuade or inform the reader—without losing the core idea.

    And that’s the key difference: LLM-edited writing retains human reasoning and intent, which makes it more relatable and effective than text written entirely by a machine. It’s the best of both worlds.

    Other Use Cases for LLMs

    1. Discovering Libraries and Tools

    LLMs are surprisingly good at helping me discover new libraries and technologies. At this point, I use them for most of my searches. Google and YouTube have become frustrating—search results are unreliable and often irrelevant.

    Just recently, I tried to find a song that played on the university radio station. YouTube helpfully decided that random recommended videos were the answer.

    An image showing how bad YouTube search has become: searching for ‘"you're the one that kept me up" song lyrics’ returned a video about how sewage recycling works and a video about chess.

    Thanks, YouTube. Very helpful.

    By contrast, when I describe what I’m looking for to ChatGPT, it often gives me a list of relevant libraries or tools I can try. Is it a replacement for Google? Not quite. But when I’m unfamiliar with a topic, it’s often the better tool.

    That said, I’ll admit this sets a dangerous precedent, given LLMs’ tendency to hallucinate—but I digress.

    2. Naming Things

    When writing C++ code, I often struggle to come up with accurate, meaningful names for functions and classes. But if I explain the purpose of the code and why common names don’t fit, LLMs will suggest ten alternatives.

    About 3–6 of them are garbage—but there’s almost always one that’s perfect.

    In this way, I use LLMs as a kind of semantic thesaurus. It’s a small thing, but surprisingly helpful.

    3. Starter Scripts in Simple Languages

    Lastly, I find LLMs useful when working with scripting languages like Python. For example, I needed a simple script that listened to input from extra mouse buttons and triggered a sequence of keyboard and mouse actions.

    I had no idea how to start, but after asking ChatGPT, I had a working script in minutes.

    This is one of the few areas where I think LLM code generation is fine, provided you already know what you’re doing. If you’re new to programming, I’d still recommend writing things out manually to build a solid foundation.

    Conclusion

    I’m just a Computer Science Master’s student with little interest in natural language processing or LLM research. I don’t have studies or citations to back me up—just anecdotes and experience. I’m not an authority on this topic.

    But my opinion is simple:

    LLMs are tools. They’re powerful when used intentionally. But they don’t replace understanding—and they never will.