Posts

Showing posts from 2025

Running Local LLMs for Cursor IDE: A Quick Guide to Ollama Integration

Running Large Language Models (LLMs) locally is becoming increasingly accessible, and integrating them directly into your IDE workflow can dramatically boost productivity if you already have capable hardware. This guide demonstrates how to run LLMs locally using Ollama and connect them to Cursor IDE.

1. Setting Up Ollama

Install Ollama: Follow the installation instructions for your operating system: https://ollama.com/docs/install

Set these environment variables so the server accepts requests from other tools:

    export OLLAMA_ORIGINS=*
    export OLLAMA_HOST=0.0.0.0:11434

Start the server and pull a model: run ollama serve & to launch the LLM server in the background, then download a model with ollama pull:

    ollama serve &
    ollama pull llama3.1:8b

Let's start with llama3.1:8b. The pull command downloads the model; you can explore other models on the Ollama website (https://ollama.com/library). You'll see a list of available mod...
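Once the server is running, it's worth a quick sanity check before wiring up Cursor. A minimal sketch, assuming Ollama's OpenAI-compatible endpoint at http://localhost:11434/v1 (the default port) and that llama3.1:8b has already been pulled:

    # Ask the local model for a short reply to confirm the server responds
    curl http://localhost:11434/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "llama3.1:8b",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'

If this returns a JSON completion, Cursor can typically be pointed at the same base URL through its custom OpenAI API settings; the exact menu location varies by Cursor version, so treat that path as an assumption.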

Running LLMs and Stable Diffusion Locally: My Setup & Getting Started

For a while now, I've been using local GPTs, specifically Large Language Models (LLMs) and image generation models. The idea of running these powerful tools on my own hardware, without relying on cloud services, is incredibly appealing. It's about control, privacy, and frankly, a bit of tech nerdiness! This post details how I've set up my local LLM environment, including integrating Stable Diffusion for some amazing image creation.

My Local Setup

Let's start with the basics. My system is running Ubuntu 25.04 (codename "Plucky") on an Intel-based machine. Here's a breakdown:

Operating System: Ubuntu 25.04
Kernel: 6.14.0-33-generic
GPU: NVIDIA RTX 500 Ada Generation (Driver Version: 580.95.05, CUDA Version: 13.0)
Display Server: Wayland (reported by $XDG_SESSION_TYPE)

Crucially, I switched to Wayland as it significantly impacted GPU utilization. Initially, I experienced limitations with GPU performance compared to X11, but I've found optimized settings to mitigate t...
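If you want to confirm the same details on your own machine, the standard tools below report everything listed above; the comments note what my setup prints, and yours will differ:

    echo $XDG_SESSION_TYPE   # "wayland" when a Wayland session is active
    uname -r                 # kernel version, e.g. 6.14.0-33-generic
    nvidia-smi               # NVIDIA driver version, CUDA version, GPU utilization
    lsb_release -a           # Ubuntu release details, e.g. 25.04 "Plucky"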

Beyond Toughness: Why Your Car's Brain Needs AEC-Q100 AND ASIL-B

AEC-Q100 vs. ASIL-B: Quality Meets Safety

In the era of autonomous driving and electric vehicles, the electronic components that run your car are more sophisticated than ever. But there's a critical difference between a chip that is merely durable and one that is truly safe. In the automotive world, this distinction is governed by two major standards: AEC-Q100 and ASIL-B (Automotive Safety Integrity Level). For any electronic control unit (ECU)—from the brake system to the battery manager—to be considered safe, it must satisfy both. Here's a look at what these standards mean, how they differ, and why they must work together.

🚧 Pillar 1: AEC-Q100 — The Quality and Reliability Guardian

AEC-Q100 is the standard established by the Automotive Electronics Council (AEC). Its focus is purely on the physical robustness and reliability of integrated circuits (ICs). This is the baseline that ensures the chip is ...

The Actor Model

Just watch what the inventor says... don't ask me: one of the simplest explanations of the Actor Model and Arbiters, and where the Turing machine screwed up...