<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Marc Delorme]]></title><description><![CDATA[Marc Delorme]]></description><link>https://marcdelorme.fr</link><image><url>https://cdn.hashnode.com/uploads/logos/6519aabfd5dd07813e303878/ab598fbe-537b-45ac-a122-dd8111a62585.png</url><title>Marc Delorme</title><link>https://marcdelorme.fr</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 06:19:49 GMT</lastBuildDate><atom:link href="https://marcdelorme.fr/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Introducing toolchains_msvc: a hermetic MSVC toolchain for Bazel]]></title><description><![CDATA[I created toolchains_msvc, a Bazel module that enables users to fetch MSVC and define rules_cc toolchains for building native Windows applications targeting the MSVC ABI — in other words, standard Win]]></description><link>https://marcdelorme.fr/introducing-toolchains-msvc-a-hermetic-msvc-toolchain-for-bazel</link><guid isPermaLink="true">https://marcdelorme.fr/introducing-toolchains-msvc-a-hermetic-msvc-toolchain-for-bazel</guid><category><![CDATA[bazel]]></category><category><![CDATA[build]]></category><category><![CDATA[buildsystem]]></category><category><![CDATA[build system]]></category><category><![CDATA[C++]]></category><category><![CDATA[Reproducible Builds]]></category><category><![CDATA[Windows]]></category><category><![CDATA[Game Development]]></category><category><![CDATA[Games]]></category><category><![CDATA[game]]></category><dc:creator><![CDATA[Marc Delorme]]></dc:creator><pubDate>Tue, 24 Mar 2026 06:57:03 GMT</pubDate><content:encoded><![CDATA[<p>I created <a href="https://github.com/Dragnalith/toolchains_msvc">toolchains_msvc</a>, a Bazel module that enables users to fetch MSVC and define rules_cc toolchains for building native Windows applications targeting the MSVC ABI — in other words, standard Windows binaries.</p>
<h2>Motivation</h2>
<p>I come from the video game industry, where I believe Bazel could be a great fit — complex builds, cross-compilation, custom tooling for shaders and assets. But game dev is a Windows-first ecosystem; even cross-compiling for consoles is done from Windows. Improving that story is a big part of what motivated this project. There's also no standard build system in the industry: Unreal Engine uses Unreal Build Tool, Unity uses Bee, Godot uses SCons — and those are just the public ones; I'm aware of more I can't discuss here. Each reinvented the wheel because no ready solution existed. Today I believe Bazel's core is capable enough to satisfy the same needs — and with solid Windows support, the barrier to entry for new players would be lower: no need to develop your own build system.</p>
<p>The Bazel ecosystem also lacks a Windows toolchain that fully unlocks what makes Bazel great — and I believe that's a friction point for adoption. Bazel adoption isn't limited by how hard it is to use, but by how much configuration it takes to make it work with your ecosystem. Bzlmod addresses that by enabling ready-made solutions for mainstream languages and platforms — but Windows is missing one. A Bzlmod module where you can <code>git clone &amp;&amp; bazel build</code>, up and running in seconds with no extra setup, lowers the bar for people evaluating Bazel on Windows and serves as a modern reference for those who need to customize it for their own ecosystem. At least, I wished it existed — so I set out to build one.</p>
<h2>Design Goals</h2>
<p>Going in, I had four goals:</p>
<ul>
<li><p>Native to the Windows ecosystem: MSVC ABI, fully compatible with the standard Windows developer experience.</p>
</li>
<li><p>Idiomatic Bazel: modern rules-based architecture, platform constraints, build settings, and preferring Bazel's feature mechanism over raw flags.</p>
</li>
<li><p>Full Bazel capabilities without compromise: hermetic through Bzlmod-based installation, reproducible builds (enabling a shared build cache), and remote execution support.</p>
</li>
<li><p>Customizable: choose your compiler frontend (cl.exe, clang-cl.exe, or clang.exe) and linker (link.exe or lld-link.exe), customize flags per compilation mode (dbg, fastbuild, opt), with defaults matching Visual Studio settings.</p>
</li>
</ul>
<h2>Fetching a Proprietary Tool: MSVC</h2>
<p>Targeting the MSVC ABI requires more than a compiler. Even when using Clang, you still need MSVC headers, libraries, and the Windows SDK — the equivalent of a sysroot for Windows. This dependency is mandatory.</p>
<p>MSVC is not open source. It is distributed by Microsoft under a license that prohibits repackaging and redistribution. Any Bzlmod module integrating MSVC must therefore acquire it through official Microsoft channels.</p>
<p>The license also explicitly states:</p>
<blockquote>
<p>you may not: work around any technical limitations in the software;</p>
</blockquote>
<p>Beyond the legal reading, there is a practical principle here: the Microsoft installer has always required the user to acknowledge the license before installation. A tool that automates this process should not silently bypass that step — users need to be aware of what they are agreeing to.</p>
<p><code>toolchains_msvc</code> addresses this as follows:</p>
<ul>
<li><p>It fetches Visual Studio Build Tools from the official Microsoft distribution channel.</p>
</li>
<li><p>It programmatically filters packages to ensure only those belonging to Visual Studio Build Tools can be installed.</p>
</li>
<li><p>It refuses to proceed unless the user has explicitly agreed to the license by setting a specific environment variable to the value documented in the module. If that variable is not set, it prints the license URL and the variable name so the user can agree deliberately.</p>
</li>
</ul>
<p>The check only triggers when Bazel actually fetches MSVC. A project that lists <code>toolchains_msvc</code> as a dependency but does not run any build action using MSVC will never fetch it and will never be prompted to agree to the license.</p>
<p>This approach was inspired by <a href="https://github.com/tgbender/portablemsvc">portablemsvc</a> and <a href="https://github.com/Data-Oriented-House/PortableBuildTools">PortableBuildTools</a>.</p>
<h2>Toolchain Definition</h2>
<p>Toolchains are defined via a <em>toolchain set</em>. A toolchain set produces a set of toolchains from the cross-product of: host platform, target platform, MSVC version, Windows SDK version, LLVM version, and compiler frontend.</p>
<p>Per toolchain set, you can configure default flags and features, as well as flags and features specific to each compilation mode (<code>dbg</code>, <code>fastbuild</code>, <code>opt</code>). You can define as many toolchain sets as needed.</p>
<p>All toolchains from all toolchain sets are declared in <code>@msvc_toolchains//BUILD.bazel</code> and registered with <code>register_toolchains("@msvc_toolchains//:all")</code>. Bazel selects the right toolchain using its standard selection mechanism based on platform constraints and <code>config_setting</code> constraints.</p>
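<p>To make this concrete, here is a minimal <code>MODULE.bazel</code> sketch. The extension name and tag attributes below are hypothetical placeholders (the module's documentation has the real API); only the <code>@msvc_toolchains</code> repository name and the <code>register_toolchains</code> call are taken from the description above.</p>
<pre><code class="lang-python"># MODULE.bazel (sketch; extension and attribute names are hypothetical)
bazel_dep(name = "toolchains_msvc", version = "0.0.0")

msvc = use_extension("@toolchains_msvc//:extensions.bzl", "msvc")
msvc.toolchain_set(
    name = "default",
    msvc_version = "latest",              # hypothetical attribute
    winsdk_version = "latest",            # hypothetical attribute
    compilers = ["msvc-cl", "clang-cl"],  # hypothetical attribute
)
use_repo(msvc, "msvc_toolchains")

# All generated toolchains are declared in @msvc_toolchains//BUILD.bazel
# and registered in a single call.
register_toolchains("@msvc_toolchains//:all")
</code></pre>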
<p>The active toolchain can be controlled with the following build settings:</p>
<ul>
<li><p><code>@msvc_toolchains//msvc=&lt;version&gt;</code></p>
</li>
<li><p><code>@msvc_toolchains//winsdk=&lt;version&gt;</code></p>
</li>
<li><p><code>@msvc_toolchains//llvm=&lt;version&gt;</code></p>
</li>
<li><p><code>@msvc_toolchains//compiler=[msvc-cl|clang-cl|clang]</code></p>
</li>
<li><p><code>@msvc_toolchains//toolchain_set=&lt;toolchain_set_name&gt;</code></p>
</li>
</ul>
<p>All settings have defaults, so <code>bazel build //my_target</code> works with no extra configuration.</p>
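<p>For example, a project can pin these choices in its <code>.bazelrc</code>; the version numbers and set name below are hypothetical, but the flag syntax is Bazel's standard way of setting user-defined build settings:</p>
<pre><code class="lang-plaintext"># .bazelrc (values are hypothetical examples)
build --@msvc_toolchains//compiler=clang-cl
build --@msvc_toolchains//llvm=21.1.0
build --@msvc_toolchains//toolchain_set=my_toolchain_set
</code></pre>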
<h2>Fine-grained Customization</h2>
<p>A toolchain that works out of the box is valuable only if it can also be customized without forking.</p>
<p>You can select any officially available version of MSVC and Windows SDK, and any version of LLVM that has a published SHA-256 digest.</p>
<p>You customize the toolchain by supplying default compile and link flag lists (per compiler and per compilation mode: <code>dbg</code>, <code>fastbuild</code>, <code>opt</code>). You can replace the built-in defaults entirely or layer additions on top of them.</p>
<p>A few behaviors are not “free-form flags” in those lists: they are implemented as standard Bazel <a href="https://bazel.build/docs/cc-toolchain-config-reference#features">features</a> so <code>cc_library</code>, <code>cc_binary</code>, and feature toggles stay consistent and work out of the box — for example <code>generate_debug_symbols</code>, <code>treat_warnings_as_errors</code>, <code>static_runtime</code>, <code>debug_runtime</code>, and LTO (<code>thinlto</code>, <code>fulllto</code>). For those, you enable or disable the feature (per target or globally), rather than duplicating the same MSVC switches in toolchain defaults. Everything else you care to pass — typical examples include <code>/Od</code>, <code>/O2</code>, <code>/W3</code>, <code>/Zc:__cplusplus</code> — remains ordinary toolchain flag customization.</p>
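<p>As a sketch of what that looks like on a target, the standard <code>features</code> attribute toggles the toolchain features named above while ordinary switches stay in <code>copts</code>; the target itself is a made-up example:</p>
<pre><code class="lang-python"># BUILD.bazel (illustrative target)
cc_binary(
    name = "my_app",
    srcs = ["main.cpp"],
    # Toolchain features from the list above, toggled per target.
    features = [
        "static_runtime",
        "treat_warnings_as_errors",
    ],
    # Ordinary MSVC switches remain plain flags.
    copts = [
        "/W3",
        "/Zc:__cplusplus",
    ],
)
</code></pre>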
<h2>Reproducibility and Hermeticity</h2>
<p>Reproducibility unlocks shared build caches — eliminating "works on my machine" and making every build verifiable. It requires two things: hermetic toolchain acquisition and deterministic compiler output.</p>
<p>For hermetic acquisition, <code>toolchains_msvc</code> fetches MSVC, Windows SDK, and optionally LLVM from their official distribution channels via Bzlmod repository rules. As an optional hardening step, you can record the SHA-256 of each package in a lock file; the repository rule will then fail if the downloaded package does not match.</p>
<p>For deterministic output, the toolchain passes flags to remove timestamps and absolute paths from build outputs.</p>
<p>Reproducibility is validated in GitHub Actions: a project and its copy are built independently, then the hashes of all source inputs and build outputs are compared to confirm they are identical.</p>
<p>For a concrete example, see <a href="https://github.com/Dragnalith/toolchains_msvc_example">toolchains_msvc_example</a>: a small DirectX 12 GUI application built with <code>toolchains_msvc</code>. The repo includes <code>sha256_manifest.py</code>, which prints SHA-256 digests for build artifacts such as <code>.obj</code>, <code>.lib</code>, <code>.pdb</code>, and <code>.exe</code>. Run it after a local build and compare the output to the same step in GitHub Actions to confirm your artifacts match.</p>
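<p>The real script lives in that repository; the idea is simple enough that a rough sketch (not the actual <code>sha256_manifest.py</code>) fits in a few lines:</p>
<pre><code class="lang-python">import hashlib
import sys
from pathlib import Path

# Print "digest  path" for every build artifact of interest so that two
# independent builds can be compared line by line.
EXTENSIONS = {".obj", ".lib", ".pdb", ".exe"}

def sha256(path: Path) -&gt; str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 &lt;&lt; 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    root = Path(sys.argv[1] if len(sys.argv) &gt; 1 else "bazel-bin")
    for p in sorted(root.rglob("*")):
        if p.is_file() and p.suffix.lower() in EXTENSIONS:
            print(f"{sha256(p)}  {p.relative_to(root)}")
</code></pre>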
<p>One caveat: <code>link.exe</code> cannot produce deterministic PDB files — its internal stream IDs are not stable across executions. <code>lld-link.exe</code> (from LLVM) does not have this limitation. By default, <code>cl.exe</code>-based toolchains use <code>link.exe</code>, while <code>clang-cl.exe</code>- and <code>clang.exe</code>-based toolchains use <code>lld-link.exe</code>. You can force <code>cl.exe</code>-based toolchains to use <code>lld-link.exe</code> via the <code>cl_with_lld_version</code> option in your toolchain set definition — at the cost of requiring LLVM to be fetched even when compiling with <code>cl.exe</code>.</p>
<h2>600MB Saved with On-demand System Libraries</h2>
<p>In a typical toolchain, system libraries are listed as dependencies of the <code>cc_tool</code> definition for the linker. On Windows, MSVC and Windows SDK libraries together weigh around 600 MB — compared to roughly 60 MB for executables and 120 MB for headers. For remote execution, this means roughly 75% of what gets uploaded is system libraries, even though most targets only need two or three of them (e.g. <code>kernel32.lib</code>, <code>user32.lib</code>).</p>
<p>In <code>toolchains_msvc</code>, system libraries are not dependencies of <code>cc_tool</code>. They are regular <code>cc_import</code> rules. To use <code>kernel32.lib</code>, you declare a dependency on <code>@msvc_toolchains//lib:kernel32.lib</code>, which is an alias that resolves to the correct library for the Windows SDK version of the selected toolchain.</p>
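<p>In practice that is an ordinary <code>deps</code> entry; the target below is a made-up example, and the <code>user32.lib</code> label simply follows the same pattern as the <code>kernel32.lib</code> one above:</p>
<pre><code class="lang-python"># BUILD.bazel (illustrative target)
cc_binary(
    name = "hello_win32",
    srcs = ["main.cpp"],
    deps = [
        # Aliases that resolve to the cc_import matching the selected
        # Windows SDK version.
        "@msvc_toolchains//lib:kernel32.lib",
        "@msvc_toolchains//lib:user32.lib",
    ],
)
</code></pre>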
<h2>Conclusion</h2>
<p>I built this project for myself first. It was my own evaluation of Bazel — I wanted to know if it was possible, and I am the first user of the result. I did this on my personal time, with the goal of building enough knowledge and confidence to invest in Bazel professionally. That goal is now confirmed: I will start transitioning my work projects to Bazel, which involves making it work with game consoles. That part will likely stay private.</p>
<p>The toolchain has not been used in production yet. Some design decisions are probably naive, and I expect bugs to surface once real projects start using it. The CI covers what it can, but automated tests cannot replace real-world usage. I will maintain and improve it as issues emerge — but that work depends on user feedback.</p>
<p>I am curious to see if this resonates. It could eventually become a BCR module. This is a modest contribution, and mostly a way to find out whether I am the only one with this need, or whether it can create traction and opportunity from there.</p>
<p>I am not attached to this repository as the final artifact. If the right outcome is a rename, a rearchitecture, or a different canonical module that learns from this one, I am fine with that. What I care about is that hermetic MSVC toolchains for Bazel exist and improve in the ecosystem — not that this particular project stays the one.</p>
<p>The natural next step would be <code>rules_visualstudio</code>. On Windows, the default debugger is Visual Studio, and a tight debugging workflow matters for adoption. Using Bazel's aspect mechanism, it should be possible to generate <code>.sln</code> and <code>.vcxproj</code> files that delegate the actual build to Bazel — a pattern already proven by Unreal Engine 5 with UnrealBuildTool. Not a big problem technically, but still some work.</p>
]]></content:encoded></item><item><title><![CDATA[Observing LLM request from Claude Code and Cursor using LiteLLM as a proxy]]></title><description><![CDATA[Why Inspect LLM Requests
This project started with a simple question: “Does Claude Code load CLAUDE.md as a system prompt or a user prompt?” Online answers hinted it was added as a user prompt, but I wanted proof—not assumptions.
With agent-based cod...]]></description><link>https://marcdelorme.fr/observing-llm-request-from-claude-code-and-cursor-using-litellm-as-a-proxy</link><guid isPermaLink="true">https://marcdelorme.fr/observing-llm-request-from-claude-code-and-cursor-using-litellm-as-a-proxy</guid><category><![CDATA[cursor-proxy]]></category><category><![CDATA[llm-proxy]]></category><category><![CDATA[claude-code]]></category><category><![CDATA[cursor]]></category><category><![CDATA[cursor ai]]></category><category><![CDATA[cursor IDE]]></category><category><![CDATA[claude code proxy]]></category><category><![CDATA[llm]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[#Coding Assistant]]></category><category><![CDATA[AI Coding Agent]]></category><category><![CDATA[ai coding agents]]></category><category><![CDATA[AI]]></category><category><![CDATA[cline]]></category><category><![CDATA[Roo Code]]></category><dc:creator><![CDATA[Marc Delorme]]></dc:creator><pubDate>Tue, 09 Dec 2025 03:07:28 GMT</pubDate><content:encoded><![CDATA[<h1 id="heading-why-inspect-llm-requests">Why Inspect LLM Requests</h1>
<p>This project started with a simple question: <em>“Does Claude Code load</em> <a target="_blank" href="http://CLAUDE.md"><code>CLAUDE.md</code></a> as a system prompt or a user prompt?” Online answers hinted it was added as a user prompt, but I wanted proof—not assumptions.</p>
<p>With agent-based coding tools, context really matters. The system prompt, supplemental context, and the way the application wraps messages can significantly affect how the model behaves. Understanding how an app structures requests is one of the most reliable ways to diagnose odd behavior or improve prompt design.</p>
<p>The most practical way to inspect those requests is to sit between the app and the LLM provider. A proxy lets you log both requests and responses in a controlled way.</p>
<p>This article explains how I built such a proxy and how you can use it to observe the request structure used by Claude Code and Cursor.</p>
<h1 id="heading-requirements-for-the-proxy">Requirements for the Proxy</h1>
<p>Claude Code allows configuring a custom endpoint via <code>ANTHROPIC_BASE_URL</code> and <code>ANTHROPIC_API_KEY</code>. The endpoint must implement the Anthropic <strong>Messages API</strong>.</p>
<p>Cursor works in a similar way: you can set an OpenAI API key and override the Base URL. However, it only supports the <strong>OpenAI Chat Completions API</strong> and does not allow localhost endpoints—requests must go through Cursor’s servers and point to a public URL.</p>
<p>So the proxy needs to meet three requirements:</p>
<ul>
<li><p>Support the Anthropic Messages API</p>
</li>
<li><p>Support the OpenAI Chat Completions API</p>
</li>
<li><p>Be hosted online (not local)</p>
</li>
</ul>
<p>LiteLLM was an ideal fit. It supports multiple providers and exposes both API formats with minimal configuration.</p>
<p>For hosting, I chose Railway because it is easy to deploy for such simple stateless services. And their free tier is good enough for such a small experiment.</p>
<p>As for the backend LLM provider, I used OpenRouter because it supports many models through a single interface—though LiteLLM can work with practically any provider.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765116434719/ea4a2b89-339d-49d2-bff1-bacb4dcbaa3c.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-building-and-deploying-the-proxy">Building and Deploying the Proxy</h1>
<p>If you only want to run the proxy, you can find the repository with deployment instructions here: <a target="_blank" href="https://github.com/Dragnalith/llm-proxy-logger">https://github.com/Dragnalith/llm-proxy-logger</a>.</p>
<p>The implementation consists of three files:</p>
<ul>
<li><p><code>config.yaml</code></p>
</li>
<li><p><code>logger.py</code></p>
</li>
<li><p><code>run_proxy.py</code></p>
</li>
</ul>
<h2 id="heading-configuration-file-configyaml">Configuration File: <code>config.yaml</code></h2>
<p>Before the proxy can forward requests, LiteLLM needs to know which incoming model names map to which real model providers. The <code>config.yaml</code> file defines that routing, the API credentials, and the callback used to log requests.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">model_list:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">model_name:</span> <span class="hljs-string">claude-sonnet-4-5-20250929</span> <span class="hljs-comment"># For Claude Code</span>
    <span class="hljs-attr">litellm_params:</span>
      <span class="hljs-attr">model:</span> <span class="hljs-string">openrouter/anthropic/claude-sonnet-4.5</span>
      <span class="hljs-attr">api_key:</span> <span class="hljs-string">os.environ/OPENAI_API_KEY</span>
      <span class="hljs-attr">base_url:</span> <span class="hljs-string">https://openrouter.ai/api/v1</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">model_name:</span> <span class="hljs-string">cls-s45</span> <span class="hljs-comment"># For Cursor</span>
    <span class="hljs-attr">litellm_params:</span>
      <span class="hljs-attr">model:</span> <span class="hljs-string">openrouter/anthropic/claude-sonnet-4.5</span>
      <span class="hljs-attr">api_key:</span> <span class="hljs-string">os.environ/OPENAI_API_KEY</span>
      <span class="hljs-attr">base_url:</span> <span class="hljs-string">https://openrouter.ai/api/v1</span>

<span class="hljs-attr">litellm_settings:</span>
  <span class="hljs-attr">success_callback:</span> [<span class="hljs-string">"logger.log_request"</span>]

<span class="hljs-attr">general_settings:</span>
  <span class="hljs-attr">master_key:</span> <span class="hljs-string">os.environ/LITELLM_MASTER_KEY</span>
</code></pre>
<p>The most important part of this configuration is the <code>model_list</code>. Each entry represents a model name that external tools (Claude CLI or Cursor) will reference. LiteLLM uses this name to determine which real LLM and provider URL to call. The value in <code>model_name</code> must exactly match what the client application will send — otherwise requests will fail silently.</p>
<p>Claude Code requires using the timestamped model identifier (e.g., <code>claude-sonnet-4-5-20250929</code>) rather than a generic alias like <code>claude-sonnet-4-5</code>. Cursor, meanwhile, needs a custom model entry that mirrors the name defined in its settings. During testing, model names containing <code>"claude"</code> caused Cursor to switch its protocol behavior, so a neutral name such as <code>cld-s45</code> is safer.</p>
<p>Next, the <code>success_callback</code> points to the function in <code>logger.py</code> that receives metadata about each successful LLM request. This is what enables logging.</p>
<p>Finally, <code>master_key</code> defines authentication for the proxy. Without it, anyone with the URL could send requests and consume your upstream API quota. Even though this setup is experimental, enabling authentication prevents accidental exposure of valid credentials.</p>
<h2 id="heading-logging-requests-loggerpy">Logging Requests: <code>logger.py</code></h2>
<p>Once the proxy can forward requests, the next step is capturing what those requests look like. The <code>logger.py</code> file defines the callback function referenced in the configuration. LiteLLM invokes this function after each successful request, passing details such as the input messages, the model response, and timing metadata.</p>
<pre><code class="lang-python"><span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">log_request</span>(<span class="hljs-params">kwargs, response_obj, start_time, end_time</span>):</span>
    <span class="hljs-keyword">with</span> open(<span class="hljs-string">'llm-proxy-logger.log'</span>, <span class="hljs-string">'a'</span>) <span class="hljs-keyword">as</span> f:
            log_entry = {
                <span class="hljs-string">'timestamp'</span>: datetime.now().isoformat(),
                <span class="hljs-string">'messages'</span>: kwargs.get(<span class="hljs-string">'input'</span>)
            }
            f.write(json.dumps(log_entry, indent=<span class="hljs-number">2</span>) + <span class="hljs-string">',\n'</span>)
</code></pre>
<p>The callback determines what gets written to the log. I chose a JSON format because it's readable and easy to parse later, but you can tailor the structure to your needs.</p>
<h2 id="heading-running-the-server-runproxypy">Running the Server: <code>run_proxy.py</code></h2>
<p>With configuration and logging in place, the final step is running the proxy. LiteLLM already provides a command-line interface to launch a proxy directly from a configuration file, and in many cases that would be sufficient. However, creating a custom runner offers additional flexibility—for example, exposing helper endpoints or controlling initialization behavior.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> uvicorn
<span class="hljs-keyword">import</span> asyncio
<span class="hljs-keyword">from</span> litellm.proxy.proxy_server <span class="hljs-keyword">import</span> app, initialize

<span class="hljs-meta">@app.get("/logs")</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">serve_logs</span>():</span>
    <span class="hljs-keyword">return</span> FileResponse(<span class="hljs-string">'llm-proxy-logger.log'</span>)

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">startup_event</span>():</span>
    <span class="hljs-keyword">await</span> initialize(config=<span class="hljs-string">'config.yaml'</span>)

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">"__main__"</span>:
    asyncio.run(startup_event())
    uvicorn.run(app, host=<span class="hljs-string">"0.0.0.0"</span>, port=<span class="hljs-number">4000</span>, lifespan=<span class="hljs-string">"on"</span>)
</code></pre>
<p>In this version, the custom runner adds a <code>/logs</code> endpoint that serves the log file via HTTP. This is particularly useful when deploying to platforms like Railway, where filesystem access is not exposed and log files cannot be retrieved through the UI.</p>
<p>The script initializes LiteLLM using the shared configuration file, then starts a lightweight FastAPI server powered by Uvicorn. Because the proxy is stateless, this setup scales easily and remains inexpensive to run.</p>
<h2 id="heading-deploy-to-railway">Deploy to Railway</h2>
<p>Once the proxy files are ready, you can deploy them to Railway. Create a Railway service, set the required environment variables (<code>OPENAI_API_KEY</code>, <code>LITELLM_MASTER_KEY</code> and <code>PORT=4000</code>), then deploy.</p>
<pre><code class="lang-plaintext">railway login
railway init
railway add
railway domain
railway up
</code></pre>
<p>The <code>railway domain</code> command assigns a public URL similar to <code>https://&lt;your-service-name&gt;.up.railway.app</code> which becomes your proxy endpoint.</p>
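<p>Before wiring up any client, you can sanity-check the deployment with a direct request. LiteLLM's OpenAI-compatible endpoint accepts the master key as a bearer token, and the model name must match one of the entries in <code>config.yaml</code>:</p>
<pre><code class="lang-bash">curl https://&lt;your-service-name&gt;.up.railway.app/v1/chat/completions \
  -H "Authorization: Bearer &lt;your-proxy-master-key&gt;" \
  -H "Content-Type: application/json" \
  -d '{"model": "cld-s45", "messages": [{"role": "user", "content": "ping"}]}'
</code></pre>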
<h1 id="heading-connecting-claude-code-to-the-proxy">Connecting Claude Code to the Proxy</h1>
<p>Before using Claude Code, set the following environment variables:</p>
<pre><code class="lang-bash">ANTHROPIC_BASE_URL=https://&lt;your-service-name&gt;.up.railway.app
ANTHROPIC_API_KEY=&lt;your-proxy-master-key&gt;
</code></pre>
<p>After that, every Claude request will flow through your proxy.</p>
<h1 id="heading-connecting-cursor-to-the-proxy">Connecting Cursor to the Proxy</h1>
<p>In the <em>Models</em> section of Cursor settings:</p>
<ol>
<li><p>Enable <strong>OpenAI API Key</strong> and paste your proxy master key.</p>
</li>
<li><p>Enable <strong>Override OpenAI Base URL</strong> and set: <code>https://&lt;your-service-name&gt;.up.railway.app</code>.</p>
</li>
<li><p>Add a new custom model name matching one from your <code>config.yaml</code>. (Example: <code>cld-s45</code>)</p>
</li>
</ol>
<p>Cursor will now send requests through the proxy when you select your custom model.</p>
<h1 id="heading-insights-from-the-logged-requests">Insights from the Logged Requests</h1>
<p>Once connected, you can inspect traffic by visiting: <code>https://&lt;your-service-name&gt;.up.railway.app/logs</code></p>
<p>After logging requests from Claude Code and Cursor, I confirmed:</p>
<ul>
<li><p><code>CLAUDE.md</code> is added to the first message of the conversation, not to the system prompt.</p>
</li>
<li><p>Cursor injects rules content in the first message using <code>&lt;rules&gt;&lt;/rules&gt;</code> tags.</p>
</li>
<li><p>The user prompt appears in a second message wrapped in <code>&lt;user_query&gt;&lt;/user_query&gt;</code>.</p>
</li>
<li><p>Before that prompt, in the second message, Cursor often includes <code>&lt;additional_data&gt;&lt;/additional_data&gt;</code> describing recent file state or edits.</p>
</li>
<li><p>Contrary to what I thought, Cursor agent modes like <em>Plan</em> or <em>Ask</em> are implemented using a <code>&lt;system_reminder&gt;&lt;/system_reminder&gt;</code> block that modifies the instructions after the user query, not in the system prompt.</p>
</li>
<li><p>System prompts differ only slightly between modes—mostly in tool guidance or small behavioral nudges.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[My ideal Japanese teacher in the age of ChatGPT]]></title><description><![CDATA[Last week I asked my Japanese teacher to explain the meaning of どーせ to me. After 15 minutes, the only thing I could grasp was that it is a word you use when you feel negative or pessimistic. I had already understood that after the first minute of the...]]></description><link>https://marcdelorme.fr/my-ideal-japanese-teacher-in-the-age-of-chatgpt</link><guid isPermaLink="true">https://marcdelorme.fr/my-ideal-japanese-teacher-in-the-age-of-chatgpt</guid><category><![CDATA[Japanese,]]></category><category><![CDATA[Language Learning]]></category><category><![CDATA[language]]></category><category><![CDATA[languages]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[Chat-GPT]]></category><category><![CDATA[llm]]></category><category><![CDATA[teachers]]></category><category><![CDATA[teaching]]></category><category><![CDATA[Kanji]]></category><category><![CDATA[asia]]></category><category><![CDATA[teach]]></category><dc:creator><![CDATA[Marc Delorme]]></dc:creator><pubDate>Sat, 03 Aug 2024 05:53:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/s9CC2SKySJM/upload/79a3cf9d55875816d4f07e8a3bc1bc93.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week I asked my Japanese teacher to explain the meaning of どーせ to me. After 15 minutes, the only thing I could grasp was that it is a word you use when you feel negative or pessimistic. I had already understood that after the first minute of the explanation; the remaining 14 minutes did not bring me any more knowledge. It was frustrating for both my teacher and me.</p>
<p>Later I asked ChatGPT. It explained that どーせ is equivalent to "anyway" (in English) or "de toute façon" (in French), used to express a sense of resignation or inevitability. Getting this explanation only took me 30 seconds and it was crystal clear.</p>
<h2 id="heading-teacher-dont-need-to-explain-japanese-anymore">Teacher don't need to explain Japanese anymore</h2>
<p>With the rise of the internet and especially technology like ChatGPT, I have started to get the impression that teachers don't need to explain Japanese anymore.</p>
<p>In the past, dictionaries and textbooks were used to replace teacher explanations. But they often failed to convey nuances, context, and culture. You needed someone to give you additional, tailored explanations; otherwise, you had to practice the new knowledge a lot and hope to "naturally get it" after some unpredictable amount of time.</p>
<p>Today you can use ChatGPT to get detailed and customized explanations. Especially when you reach a certain level, where your questions become very precise and technical, it becomes challenging for the teacher to give a satisfying explanation.</p>
<p>Indeed, your Japanese teacher is most likely a native Japanese speaker. It is hard for native speakers to explain the fine points of their language because those points simply feel natural to them. Even when an explanation exists, your Japanese may not be good enough to understand it with precision. Such questions are easier to answer by comparison with a language you already know, but that requires your teacher to be equally proficient in that language, which is often not the case.</p>
<p>ChatGPT does not have this problem. It can speak any language; it can explain the nuances between words and generate the same example in any language so you can make comparisons yourself.</p>
<p>Why would a teacher bother spending time on explanations when they can simply delegate them to ChatGPT?</p>
<p>Make no mistake, I am not saying teachers can be replaced by ChatGPT. I am saying the value of teachers should no longer be in technical explanations.</p>
<h2 id="heading-i-wish-for-teachers-to-decide-and-maintain-my-learning-path">I wish for teachers to decide and maintain my learning path</h2>
<h3 id="heading-i-wish-for-teachers-to-give-me-a-personalized-learning-methodology">I wish for teachers to give me a personalized learning methodology</h3>
<p>Accessing detailed technical explanations is not all you need to learn a language; you also need to retain this knowledge.</p>
<p>Most of my life, when teachers gave me homework, they often asked me to <em>review</em> the lesson or <em>learn</em> vocabulary. These instructions are vague. Beginners may think <em>review</em> and <em>learn</em> mean <em>read again</em>, but experienced learners know that repetition alone doesn't help much. Reading a word 50 times won't make it stick more than after the first few repetitions.</p>
<p>To learn new knowledge, you need to practice it in various ways and spread those exercises over time. I believe there are many methods, but not all fit everyone the same.</p>
<p>Creating a methodology and learning routine is more challenging than it seems. You need to be creative, and the numerous options can be overwhelming. Most of us end up just reading again.</p>
<p>In other words, being told "to learn [something]" is too vague. I need specific tasks I can act on without overthinking. Researching and deciding how to study Japanese is a burden I want to pass to my teachers, so I can focus on the actual work.</p>
<h3 id="heading-i-wish-for-teachers-to-keep-me-focus-on-what-matters">I wish for teachers to keep me focus on what matters</h3>
<p>I often overwhelm myself with too many questions. Recently, while studying the manga <a target="_blank" href="https://en.wikipedia.org/wiki/Slam_Dunk_(manga)">Slam Dunk</a>, I came up with 65 questions after reading the first chapter, not including vocabulary I had already looked up.</p>
<p>I want to understand everything in depth, including nuances, subtext, and the unspoken. If fluent people can understand it, I want to understand it too.</p>
<p>As a consequence, I always have a lot of questions, and the fear of missing something makes me want to clear all of them right away. However, not all questions are equally important, and tackling them all at once isn't efficient. It can be discouraging and counterproductive.</p>
<p>I wish for teachers to help me prioritize my questions and focus on what truly matters. I want them to reassure me that I am not missing anything important.</p>
<h3 id="heading-i-wish-for-teachers-to-challenge-me-beyond-my-comfort-zone">I wish for teachers to challenge me beyond my comfort zone</h3>
<p>Learning alone can often lead to staying within your comfort zone. You might get used to certain materials, like watching TV dramas, and consume more and more of them. While this feels rewarding and you get better at it, you might end up only doing that. Over time, without noticing, you get stuck.</p>
<p>For me, it was using a flashcard system called <a target="_blank" href="https://apps.ankiweb.net/">Anki</a>. Every day, I reviewed vocabulary and only that. Although I learned many new words and felt like I was progressing, my overall speaking skills didn't improve much because I wasn't practicing other skills like listening or reading.</p>
<p>I realized this problem in January 2024 when I restarted taking Japanese lessons. My teacher asked me to read articles every week. In the past, I quickly gave up on reading practice because there were too many words I didn't know, and it felt pointless to spend most of the time looking them up in a dictionary. I thought it was too early for me and that I should focus on vocabulary first.</p>
<p>But this time I had no choice, so I forced myself to read. I had to look up all the words in a dictionary. Yes, it took time, but actually less than I thought. Moreover, it was not as useless as I had convinced myself. Once you look up all the unknown vocabulary, you do feel you can read, and it is very rewarding. It is also better to learn vocabulary in context rather than just through flashcards.</p>
<p>Without my teacher pushing me to try new things, I would have remained stuck. They should keep me focused initially but challenge me beyond my comfort zone when it's time. By introducing new exercises or materials at the right moment, they keep my growth and improvement going at a steady pace.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>My ideal teacher takes responsibility for my success. They will do everything for me except what only I can do.</p>
<p>Indeed, if my brain is not put to work, I don't learn. I need to read, write, research, and recall. It's part of the process. This only I can do.</p>
<p>For everything else, I want to pay someone to handle it. In particular, I want to delegate the burden of deciding what to do. I don't know how to learn Japanese efficiently, and I don't care about learning that skill.</p>
]]></content:encoded></item></channel></rss>