Create a REST API for the Microsoft/BitNet B1.58 model and integrate it with Open WebUI


Now I'm writing/semi-complaining lol. Normally I use models from Ollama, and I happened to come across this person's post on X (Twitter). It's about a Microsoft model that people say runs on CPU just fine, and if you use something like an M2, it's even faster.

Microsoft just released a 1-bit LLM with 2B parameters that can run on CPUs like Apple M2.

BitNet b1.58 2B4T outperforms full-precision LLaMA 3.2 1B while using only 0.4GB memory versus 2GB and processes tokens 40% faster.

100% opensource. pic.twitter.com/kTeqTs6PHd

— Shubham Saboo (@Saboo_Shubham_) April 18, 2025


And that model is microsoft/BitNet b1.58 2B4T. After seeing the news in April 2025, I waited to see if anyone would implement it in Ollama. I saw people asking about it too, but there was still no update.

I waited until June 2025 and still nothing. Oh well, let me find a way to run it myself from the code then. Initially, I set a simple goal: put the model in a container and find something that can expose an endpoint that works with Open WebUI (a web interface for chatting, like ChatGPT). This blog documents my experience with that experiment.

If you want to read the Thai version: อ่านได้ที่นี่ (read it here)




Getting Ready to Run microsoft/BitNet

- Linux (I actually tried this in Docker)


I took a Python base image and installed dependencies with the following Dockerfile:
# Use official Python 3.12 image
FROM python:3.12-slim

# Install system dependencies for PyTorch and build tools
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        build-essential \
        cmake \
        git \
        curl \
        ca-certificates \
        libopenblas-dev \
        libomp-dev \
        libssl-dev \
        libffi-dev \
        wget \
    && rm -rf /var/lib/apt/lists/*

# (Optional) Set a working directory
WORKDIR /app

# Copy your requirements.txt if you have one
COPY requirements.txt .
RUN pip install --upgrade pip && pip install -r requirements.txt
Create a requirements.txt file:
fastapi==0.110.2
uvicorn[standard]==0.29.0
transformers==4.52.4
torch==2.7.0
numpy==1.26.4
accelerate==0.29.0
Run it normally (standard execution):

# Build the image (image names must be lowercase)
docker build -t python-bitnet .

# Run the container with port forwarding and mounting your code
docker run -it -p 8888:8888 -v "$PWD":/app python-bitnet /bin/bash
Use a DevContainer (this is the method I tried)

- Windows: This one has quite a few steps, for those who like challenges


I say it's challenging because I tried it and got stuck for 2 weeks lol. On Linux, it's done in a flash. For anyone who wants to try, you need to have the following:

  • For Visual Studio, you need to install the additional C++ components - the "Desktop development with C++" workload.



  • Building from a regular PowerShell won't work - you have to run it in the Developer Command Prompt for VS 2022 or Developer PowerShell for VS 2022.
    The regular terminal doesn't set all the variables like Path properly, so you'll encounter errors like:


Error C1083: Cannot open include file: 'algorithm': No such file or directory

Even if you run vcvarsall.bat x64 yourself, it's hit or miss - sometimes it works, sometimes it doesn't.

Note: vcvarsall.bat is located in "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat"


  • Add the Python lib directory (the folder containing the .lib files) to your path - if you don't include it, the build crashes with:


fatal error LNK1104: cannot open file 'python312.lib'

Add the Python lib folder to PATH for running the C++ libraries used for AI inference.
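If you're unsure where that folder lives, here's a quick way to find it (my own addition, assuming a standard CPython install on Windows, where the import libraries sit in <prefix>\libs):

import os
import sys

# Standard CPython installs on Windows keep python312.lib in <prefix>\libs
libs_dir = os.path.join(sys.base_prefix, "libs")
print(libs_dir)  # add this directory to LIB/PATH for the linker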
Once the environment is ready:

  • Set up a virtual environment


# Set ENV
python3 -m venv bitnet-env
# or
python -m venv bitnet-env

  • Activate the virtual environment


# Linux
source bitnet-env/bin/activate

# Windows - PowerShell
.\bitnet-env\Scripts\Activate.ps1

# Windows - CMD
.\bitnet-env\Scripts\activate.bat

  • Install the required libraries from requirements.txt - if you're using the Linux/Docker approach, these dependencies are already baked into the container image.


fastapi==0.110.2
uvicorn[standard]==0.29.0
transformers==4.52.4
torch==2.7.0
numpy==1.26.4
accelerate==0.29.0

pip install --upgrade pip && pip install -r requirements.txt
pip install git+github.com/huggingface/transfo…
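As a quick sanity check that the environment is ready (my own addition, not part of the original setup):

import torch
import transformers

# Confirm the key libraries import and report their versions
print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())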

Writing Code to Use the Model from Hugging Face


After resolving all the environment issues, let's get coding. I mentioned wanting to connect it with Open WebUI, so I made two versions: a command-line version and an API version.

- Command Line


I tried writing code using:

  • Transformers: For loading pre-trained models from Hugging Face
  • PyTorch: For model inference behind transformers (my machine's specs are enough for inference, but not for training/fine-tuning). PyTorch shows up in several places, such as:
    - bfloat16: Halves memory versus FP32; compared to FP16 it uses the same 2 bytes per value, trading mantissa precision for FP32's exponent range (see the small check after this list)
    - return_tensors="pt": Specifies PyTorch tensor format
    - to(model.device): Moves the input tensors onto the model's device, enabling GPU acceleration if CUDA is available
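A tiny check of that memory claim (my addition, not from the original post):

import torch

# Bytes per element: bfloat16 and float16 both use 2, float32 uses 4
print(torch.tensor(0, dtype=torch.bfloat16).element_size())  # 2
print(torch.tensor(0, dtype=torch.float16).element_size())   # 2
print(torch.tensor(0, dtype=torch.float32).element_size())   # 4

With that out of the way, here's the full command-line version: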


import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/bitnet-b1.58-2B-4T"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    force_download=True,
)

# Apply the chat template + role
messages = [
    {"role": "system", "content": "You are a Senior Programmer."},
    {"role": "user", "content": "Can you help me with a coding problem?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
chat_input = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate response
chat_outputs = model.generate(**chat_input, max_new_tokens=50)
response = tokenizer.decode(chat_outputs[0][chat_input['input_ids'].shape[-1]:], skip_special_tokens=True)
print("\nAssistant Response:", response)

The command-line version - you'll see that Windows has many limitations.
Another version adds a loop and keeps prompting until you type "Thank you BITNET" - you can see the source code here; a minimal sketch follows.
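A rough sketch of that loop (my reconstruction, not the exact source - it reuses the tokenizer and model loaded in the snippet above; only the "Thank you BITNET" exit phrase comes from the post):

def chat_loop():
    # Keep the whole conversation so the model sees prior turns
    messages = [{"role": "system", "content": "You are a Senior Programmer."}]
    while True:
        user_input = input("You: ")
        if user_input.strip() == "Thank you BITNET":
            break
        messages.append({"role": "user", "content": user_input})
        prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
        chat_input = tokenizer(prompt, return_tensors="pt").to(model.device)
        chat_outputs = model.generate(**chat_input, max_new_tokens=200)
        reply = tokenizer.decode(
            chat_outputs[0][chat_input["input_ids"].shape[-1]:],
            skip_special_tokens=True,
        )
        messages.append({"role": "assistant", "content": reply})
        print("Assistant:", reply)

chat_loop()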

- API


Note: I didn't research which libraries could make our API connect to Open WebUI directly.

Initially, I checked which connection standards Open WebUI supports - for the text-prompt part, it has OpenAI and Ollama sections.

I chose the OpenAI API route because when I played with the dotnet Semantic Kernel earlier, it used the /v1/chat/completions pattern, so I started from there, added the endpoint in Open WebUI, and watched which paths it actually hits in our code.

From what I tested, Open WebUI calls at least these 3 API endpoints:

  • /v1/chat/completions
  • /v1/models
  • /health

For /v1/chat/completions, I just kept adding fields based on what Open WebUI complained about, plus asked an AI, until all 3 endpoints were done:
import time
import uuid
from datetime import datetime
from typing import Dict, List, Optional

import torch
from fastapi import FastAPI
from fastapi.responses import JSONResponse
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

app = FastAPI()

# Load model and tokenizer at startup
model_id = "microsoft/bitnet-b1.58-2B-4T"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    force_download=True,
)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

class Message(BaseModel):
    role: str
    content: str

class ChatRequest(BaseModel):
    messages: List[Message]
    max_new_tokens: Optional[int] = 700

class Choice(BaseModel):
    index: int
    message: Dict[str, str]
    finish_reason: str

class ChatResponse(BaseModel):
    id: str
    object: str
    created: int
    model: str
    choices: List[Choice]

@app.post("/v1/chat/completions", response_model=ChatResponse)
async def chat_completions(request: ChatRequest):
    # Prepare prompt using chat template
    prompt = tokenizer.apply_chat_template(
        [msg.dict() for msg in request.messages],
        tokenize=False,
        add_generation_prompt=True
    )
    chat_input = tokenizer(prompt, return_tensors="pt").to(model.device)
    chat_outputs = model.generate(**chat_input, max_new_tokens=request.max_new_tokens)
    response = tokenizer.decode(
        chat_outputs[0][chat_input['input_ids'].shape[-1]:],
        skip_special_tokens=True
    )

    # Return response in OpenAI-compatible format
    return ChatResponse(
        id=f"chatcmpl-{uuid.uuid4().hex[:12]}",
        object="chat.completion",
        created=int(time.time()),
        model=model_id,
        choices=[
            Choice(
                index=0,
                message={"role": "assistant", "content": response},
                finish_reason="stop"
            )
        ]
    )

@app.get("/")
def root():
    """Root endpoint with API info"""
    return JSONResponse({
        "message": "OpenAI-Compatible API for Open WebUI",
        "version": "1.0.0",
        "endpoints": {
            "models": "/v1/models",
            "chat": "/v1/chat/completions",
            "health": "/health"
        }
    })

@app.get("/health")
def health_check():
    """Health check endpoint"""
    return JSONResponse({"status": "healthy", "timestamp": datetime.now().isoformat()})

@app.get("/v1/models")
def list_models():
    """List available models"""
    return JSONResponse({
        "data": [
            {
                "id": model_id,
                "object": "model",
                "created": datetime.now().isoformat(),
                "owned_by": "microsoft",
                "permission": []
            }
        ]
    })
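To try it before wiring up Open WebUI, I'd run the server with uvicorn (assuming the code above is saved as app.py - the filename and port 8000 are my assumptions, not from the original post) via uvicorn app:app --host 0.0.0.0 --port 8000, then poke the three endpoints:

import requests

BASE = "http://localhost:8000"  # assumed host/port - match your uvicorn settings

# The two GET endpoints Open WebUI probes
print(requests.get(f"{BASE}/health").json())
print(requests.get(f"{BASE}/v1/models").json())

# An OpenAI-style chat completion request
resp = requests.post(
    f"{BASE}/v1/chat/completions",
    json={"messages": [{"role": "user", "content": "Hello, who are you?"}]},
)
print(resp.json()["choices"][0]["message"]["content"])

In Open WebUI, the same base URL (with the /v1 prefix) then goes into an OpenAI-compatible connection.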
For actual use, I packaged it into Docker. During the build I was shocked by the image size - almost 10 GB.

I tried using it for real, connected it with Open WebUI - sometimes it gives okay answers, sometimes it hallucinates lol

But what's for sure is the CPU usage shoots up lol

That concludes my rough trial of running the model. If I find something better, I'll write another blog post - feel free to reach out with suggestions. Oh, and don't force it on Windows; mine got squeezed by WSL2. Taking an old notebook and installing Linux to make a local AI inference engine is still faster.

For all the code, I've uploaded it to Git: github.com/pingkunga/python_mi…



#python #EnglishBlog #SLM #microsoftBitNet #LocalAIModel





