Ultimate Guide to Deploying .NET on Linux for Production (2026)

Introduction

A comprehensive, security-first manual for deploying high-performance .NET applications on Ubuntu and Debian servers.

Deploying .NET applications to a production Linux environment requires balancing performance, maintainability, and strict security. While many tutorials suggest running applications as root or using default directories, professional DevOps standards demand a more robust approach.

To follow along with the code used in this guide, you can check the source code on GitHub.

The Architecture of a Professional Deployment

Before executing commands, it is vital to understand the request flow. In a production-grade setup, your .NET application (Kestrel) should never be exposed directly to the public internet. Instead, we use Nginx as a "Shield."

  • Nginx: Handles HTTPS, request filtering, and static file caching.
  • Kestrel: The high-performance internal web server provided by Microsoft.
  • Systemd: The Linux init system that ensures your app stays running 24/7.
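The core rule of this topology can be expressed as a quick sanity check. The snippet below is a minimal, self-contained sketch: the `ASPNETCORE_URLS` value is an assumption matching the systemd unit used later in this guide, and the check simply confirms Kestrel is bound to the loopback interface only.

```shell
# Sanity check: Kestrel must bind to loopback only, leaving Nginx as the
# single public entry point. ASPNETCORE_URLS mirrors the systemd unit
# configured later in this guide (an assumption; adjust to your setup).
ASPNETCORE_URLS="http://localhost:5000"
case "$ASPNETCORE_URLS" in
  http://localhost:*|http://127.0.0.1:*)
    echo "OK: Kestrel is bound to loopback only" ;;
  *)
    echo "WARNING: Kestrel may be exposed directly to the internet" ;;
esac
```

If the warning branch fires (for example with `http://0.0.0.0:5000`), the application is reachable without passing through the Nginx shield.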

Prerequisites: Preparing and Hardening the Linux Environment

Before proceeding, your server must be hardened and Nginx must be properly installed. We strongly recommend following our Server Hardening and Nginx Installation Guide to ensure your firewall (UFW) is configured correctly.

Once your environment is hardened, verify that the .NET SDK and Runtime are configured.

dotnet --list-sdks
dotnet --list-runtimes
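For deployment scripts, you can fail fast by grepping the runtime list for the ASP.NET Core shared framework. In the sketch below, the `RUNTIMES` sample line stands in for real `dotnet --list-runtimes` output so the check is self-contained; in practice you would capture the command's output directly.

```shell
# Hedged sketch: verify the ASP.NET Core runtime is present before deploying.
# RUNTIMES simulates `dotnet --list-runtimes` output here; on a real server
# replace it with: RUNTIMES=$(dotnet --list-runtimes)
RUNTIMES="Microsoft.AspNetCore.App 8.0.11 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]"
if echo "$RUNTIMES" | grep -q "Microsoft.AspNetCore.App"; then
  echo "ASP.NET Core runtime found"
else
  echo "ASP.NET Core runtime missing - install it before continuing" >&2
  exit 1
fi
```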

If you haven't installed .NET yet, follow our Multi-SDK Installation Guide.


Step 1: Secure Directory & Permission Strategy

Professional Linux administrators avoid /var/www/html for application logic. Depending on your deployment workflow, your directory structure will differ:

Option A: Manual Deployment (Learning Strategy)

We use a dual-folder structure to separate source code from compiled binaries. This prevents .git folders or developer artifacts from being accessible by the web server.

# Create the learning structure within the home directory
sudo mkdir -p /home/$USER/webapps/DemoApp/source
sudo mkdir -p /home/$USER/webapps/DemoApp/publish

Option B: Automation Deployment (CI/CD Strategy)

In a professional CI/CD pipeline (e.g., GitHub Actions), the build happens on the pipeline's runner rather than on your production server. The pipeline then pushes only the final binaries to a single production folder.

# Create the production automation folder
sudo mkdir -p /home/$USER/webapps/DemoApp.com

Regardless of the method, always set ownership to your standard system user (non-root). Adjust the path to match the strategy you chose (DemoApp shown here; use DemoApp.com for the CI/CD layout):

sudo chown -R $USER:$USER /home/$USER/webapps/DemoApp
sudo chmod -R 755 /home/$USER/webapps/DemoApp
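To confirm the permissions actually took effect, `stat` can print the octal mode and owner. The sketch below uses a throwaway temporary directory instead of the real webapps path so it can be run safely anywhere:

```shell
# Verify mode 755 (owner rwx, group/other r-x) on a directory.
# A mktemp dir stands in for /home/$USER/webapps/DemoApp here.
APPDIR=$(mktemp -d)
chmod 755 "$APPDIR"
echo "mode=$(stat -c '%a' "$APPDIR") owner=$(stat -c '%U' "$APPDIR")"
rmdir "$APPDIR"
```

On the real tree, run `stat -c '%a %U' /home/$USER/webapps/DemoApp` and expect `755` with your own username, never `root`.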

Step 2: Source Management & Optimized Publishing

For this manual guide, navigate to your source directory and clone the external repository:

cd /home/$USER/webapps/DemoApp/source
git clone https://github.com/SaraRasoulian/DotNet-WebAPI-Sample.git .
dotnet publish -c Release -o /home/$USER/webapps/DemoApp/publish --runtime linux-x64 --self-contained false

Fixing Project File Location Errors

When working with complex repositories, a common error is MSB3202: The project file was not found. This happens when the .csproj file is nested inside a subfolder rather than sitting at the repository root.

To resolve this, explicitly point dotnet publish to the path of the WebAPI.csproj file:

# Optimized production publishing pointing to the specific project path
dotnet publish src/WebAPI/WebAPI.csproj -c Release -o /home/$USER/webapps/DemoApp/publish --runtime linux-x64 --self-contained false
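If you are unsure where a repository keeps its project files, `find` lists every candidate path you can pass to dotnet publish. The sketch below builds a throwaway tree mirroring the sample repository's `src/WebAPI` layout so it runs without cloning anything:

```shell
# Locate every .csproj in a repo to pick the right publish target.
# A temporary tree mimics the sample repository's src/WebAPI layout.
REPO=$(mktemp -d)
mkdir -p "$REPO/src/WebAPI"
touch "$REPO/src/WebAPI/WebAPI.csproj"
# In a real checkout, run from the repo root: find . -name '*.csproj'
find "$REPO" -name '*.csproj' -printf '%P\n'   # -> src/WebAPI/WebAPI.csproj
rm -r "$REPO"
```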

Step 3: Background Automation with Systemd

Systemd ensures your application restarts automatically if it crashes. Create the service file: sudo nano /etc/systemd/system/DemoApp.service

[Unit]
Description=.NET Web API Production Service
After=network.target

[Service]
# Ensure WorkingDirectory matches your chosen strategy (publish or DemoApp.com)
WorkingDirectory=/home/YOUR_USERNAME/webapps/DemoApp/publish
ExecStart=/usr/bin/dotnet /home/YOUR_USERNAME/webapps/DemoApp/publish/WebAPI.dll
Restart=always
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=dotnet-demo-app
User=YOUR_USERNAME

# Environment Variables for Production
Environment=ASPNETCORE_ENVIRONMENT=Production
Environment=ASPNETCORE_URLS=http://localhost:5000

[Install]
WantedBy=multi-user.target

Save and close the file, then reload Systemd and enable the service so it survives reboots:

sudo systemctl daemon-reload
sudo systemctl enable DemoApp
sudo systemctl start DemoApp
sudo systemctl status DemoApp

Step 4: Securing Production Secrets (appsettings.json)

In a production environment, your appsettings.json should only contain non-sensitive defaults. Hardcoding connection strings or API keys inside this file is a major security risk. Real secrets should be injected via Environment Variables to ensure they remain safe on the server even if source code is compromised.

1. The Production appsettings.json (Template)

Your appsettings.json in the publish directory should use placeholders for sensitive data:

{
  "ConnectionStrings": {
    "Database": "Host=localhost;Port=5433;Database=CustomerLoyaltyDB;Username=postgres;Password=SET_VIA_ENVIRONMENT;",
    "Redis": "localhost:6379"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "Serilog": {
    "Using": [
      "Serilog.Sinks.File"
    ],
    "MinimumLevel": {
      "Default": "Information"
    },
    "WriteTo": [
      {
        "Name": "File",
        "Args": {
          "path": "../logs/webapi-.log",
          "rollingInterval": "Day",
          "outputTemplate": "{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} {CorrelationId} {Level:u3} {Username} {Message:lj}{Exception}{NewLine}"
        }
      },
      {
        "Name": "Console",
        "Args": {
          "outputTemplate": "{Timestamp:yyyy-MM-dd HH:mm:ss} [{Level:u3}] {Message}{NewLine}{Exception}"
        }
      }
    ]
  },
  "JwtSettings": {
    "Issuer": "http://localhost:5000/",
    "Audience": "http://localhost:5000/",
    "Key": "This is a sample secret key - please don't use in production environment"
  },
  "AllowedHosts": "*"
}

2. Injecting Secrets via Systemd

The most secure way to provide these secrets on Linux is through the Systemd service file. .NET automatically maps environment variables prefixed with ASPNETCORE_ or variables that follow the JSON hierarchy using double underscores (__) as a delimiter.
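The mapping is purely mechanical: each colon in the configuration path becomes a double underscore in the variable name. A short sketch of the transformation, using key paths taken from the template above:

```shell
# .NET configuration key -> environment variable name:
# colons in the JSON hierarchy become double underscores.
for key in "ConnectionStrings:Database" "JwtSettings:Key"; do
  echo "$key  ->  $(echo "$key" | sed 's/:/__/g')"
done
```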

Edit your service file again (sudo nano /etc/systemd/system/DemoApp.service) and append your production secrets under the [Service] section:

[Service]
# ... existing configuration ...

# Database Connection String Secret (the key must mirror the JSON hierarchy: ConnectionStrings:Database)
Environment=ConnectionStrings__Database="Host=127.0.0.1;Port=5433;Database=RealProdDb;Username=db_admin;Password=YourExtremelyComplexPassword123!;"

# Other API Secrets
Environment=ExternalServices__ApiKey="prod-live-key-abc-123"
Environment=JwtSettings__Key="high-entropy-random-string-here"

# ... rest of the file ...

3. Applying Configuration Changes

After updating your service file with new secrets, reload the daemon and restart the service for the changes to take effect:

sudo systemctl daemon-reload
sudo systemctl restart DemoApp

Step 5: Advanced Nginx Tuning for Performance

A professional Nginx configuration protects Kestrel from direct exposure. We will create a dedicated configuration file and enable it by linking it to the active sites directory.

Create the configuration file:

sudo nano /etc/nginx/sites-available/DemoApp.conf

Paste the following configuration:

server {
    listen 80;
    server_name demoapi.devopsfix.com;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Enable the site and restart Nginx:

# Create a symbolic link to sites-enabled
sudo ln -s /etc/nginx/sites-available/DemoApp.conf /etc/nginx/sites-enabled/

# Test the configuration and restart
sudo nginx -t
sudo systemctl restart nginx
sudo systemctl status nginx
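The sites-available/sites-enabled split works through nothing more than a symbolic link, which makes enabling and disabling sites reversible without deleting configuration. The sketch below reproduces the pattern in a temporary directory standing in for /etc/nginx, so it is safe to run anywhere:

```shell
# Debian/Ubuntu Nginx pattern: configs live in sites-available and are
# activated via symlinks in sites-enabled. Temp dirs stand in for /etc/nginx.
NGINX=$(mktemp -d)
mkdir -p "$NGINX/sites-available" "$NGINX/sites-enabled"
touch "$NGINX/sites-available/DemoApp.conf"
ln -s "$NGINX/sites-available/DemoApp.conf" "$NGINX/sites-enabled/DemoApp.conf"
[ -L "$NGINX/sites-enabled/DemoApp.conf" ] && echo "site enabled"
# Disabling is just removing the link; the config file itself survives:
rm "$NGINX/sites-enabled/DemoApp.conf"
[ -f "$NGINX/sites-available/DemoApp.conf" ] && echo "config preserved"
rm -r "$NGINX"
```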

Step 6: Enforcing HTTPS with Let's Encrypt

Certbot automates the acquisition and renewal of SSL certificates. In 2026, serving production traffic over plain HTTP is no longer acceptable for a professional deployment.

Important Verification: Certbot and Let's Encrypt require your server to be publicly reachable over the internet, with a valid public DNS record pointing to its IP address. If you attempt this on a local machine (localhost) or a strictly private internal network, Let's Encrypt cannot verify your domain and the process will fail. For private networks or local-only servers, use self-signed certificates instead.

# Install Certbot and the Nginx plugin
sudo apt install certbot python3-certbot-nginx -y

# Run Certbot to acquire and install the SSL certificate
sudo certbot --nginx -d demoapi.devopsfix.com

Expert DevOps Best Practices Table

| Management Task  | Standard Procedure        | DevOps Benefit                        |
| ---------------- | ------------------------- | ------------------------------------- |
| Security Updates | Run unattended-upgrades   | Closes vulnerabilities automatically. |
| Logging          | Use journalctl -u DemoApp | Quickly identifies runtime errors.    |
| Privileges       | Run as $USER, not root    | Prevents total system compromise.     |

Troubleshooting the Production Stack

Engineering roadblocks are common. Use the following steps to diagnose common errors:

  • 502 Bad Gateway: This error indicates Nginx is running but cannot communicate with your .NET application. If you encounter this, follow our detailed guide on Fixing 502 Bad Gateway Nginx .NET Errors.
  • 403 Forbidden: Usually a directory permission issue. Ensure execute permissions on all parent folders.
  • App Crashes: Check internal logs: journalctl -u DemoApp -f.
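For the 502 case, the first question is whether anything is listening on the Kestrel port at all. Bash's built-in /dev/tcp gives a dependency-free probe; port 5000 is an assumption matching the ASPNETCORE_URLS value used earlier in this guide:

```shell
# Quick 502 triage: is anything listening on Kestrel's port?
# Uses bash's /dev/tcp redirection, so no extra tools are required.
PORT=5000
if (exec 3<>"/dev/tcp/127.0.0.1/$PORT") 2>/dev/null; then
  echo "A process is listening on port $PORT"
else
  echo "Nothing is listening on port $PORT - check: journalctl -u DemoApp -n 50"
fi
```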

Frequently Asked Questions (FAQ)

1. Why use Nginx?

It provides request buffering, SSL termination, and a unified entry point.

2. How to manage secrets?

As detailed in Step 4, use Linux Environment Variables in the Systemd file using the Environment= directive to prevent hardcoded credential leakage in your source code.

Conclusion: By following this guide, you have ensured your .NET application is secure, scalable, and easy to manage in production.
