Portainer: A Guide to Managing Your Docker Containers

Exposing this Service Publicly on the Internet


Portainer is a free and open-source tool for managing all your Docker containers. It has a clean Web UI and can be installed on a server with a single docker run command, deployed via a Docker Compose file, or even run in a full Kubernetes environment.

In this blog, let's have a look at how to easily install Portainer on an Ubuntu server and use it to manage the Docker resources on that machine. We will also look at how to expose this service publicly on the internet with trusted SSL certificates by using a reverse proxy.

I hope you are already familiar with the concept of containerization with Docker in general, because to use Portainer you first need Docker installed on your Ubuntu server.

So, if you don't know anything about Docker or how to install it on a server, check out Docker's official documentation before going further. If you want to deploy this in a cloud environment, many providers offer a one-click image of Ubuntu with Docker and Docker Compose pre-installed, which makes things even easier.
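If you would rather install Docker yourself, a minimal sketch on Ubuntu looks roughly like this, using Docker's official convenience script (check the official documentation for the currently recommended method):

# Download and run Docker's official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Verify that Docker and the Compose plugin are available
docker --version
docker compose version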

Okay, so I have already prepared a cloud instance on DigitalOcean with Ubuntu, Docker, and Docker Compose pre-installed. Now, to install Portainer on the server and manage our Docker containers, we can simply run it in a Docker container itself.

1. First of all, we need to create a new volume to store our data persistently on the system, which can be done by executing:

docker volume create portainer_data
  • That will create a new volume we can use to store the Portainer data persistently. To run Portainer on this particular server, just execute:
    docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
    
  • That will automatically start the container. We can check with the docker ps command whether the container is running, as sketched below.
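If you want to double-check from the CLI that everything is in place, something like this should do (purely illustrative):

# Show the running Portainer container
docker ps --filter name=portainer

# Confirm the data volume was created
docker volume inspect portainer_data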

2. Now we simply need to access the Portainer Web UI via the server's public IP address on port 9000, for example by opening something like http://206.180.54.117:9000 in a web browser. (A couple of ways to find the public IP from the shell are sketched below.) Note that we are using an unencrypted connection via plain HTTP, because Portainer by default doesn't ship with an SSL certificate. We will later have a look at how to properly expose this service to the public internet with a reverse proxy, but first, let's start with the basic installation of Portainer.
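A couple of ways to find the server's public address from the shell (the IP above is only an example, and ifconfig.me is just one of several services that echo your public IP):

# List all network interfaces and their addresses
ip a

# Or ask an external service for the public IPv4 address
curl -4 ifconfig.me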


  • Pick a secure password and click on Create user.

3. Now we need to select our Portainer environment. As I mentioned, you can even deploy this in a full Kubernetes environment, which is absolutely awesome, or connect it to remote machines via an agent so that you can manage multiple Docker servers with one single Portainer instance. But in this case we want to manage our local Docker environment.


  • It also tells us to make sure that the Docker socket is mounted into the Portainer container; note that we have already done that with the docker run command above. So, let's click on Connect, and that will connect the Portainer installation to our local resources.


  • You can see that it already shows one container running. This is because Portainer can access everything that is running on the Docker host, even containers that were not deployed via Portainer itself. You can also add other servers here and manage everything from one single Web UI, which is pretty awesome. To do that, you register a new endpoint and then decide how to connect it: with the Portainer Agent running on the remote machine, with the Portainer Edge Agent, or directly via the Docker API; you can even manage Azure resources. That is pretty cool, and I've used it to manage multiple servers from one single Portainer instance. A rough sketch of the agent setup is shown below.
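As a rough sketch, connecting an additional server with the Portainer Agent usually comes down to starting the agent container on the remote machine and then registering that host as a new endpoint; the exact image tag and port may differ, so check the Portainer documentation:

# On the remote Docker host: start the Portainer Agent (listens on port 9001 by default)
docker run -d -p 9001:9001 --name portainer_agent --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker/volumes:/var/lib/docker/volumes portainer/agent

Then, in the Portainer UI, add a new endpoint of type Agent pointing at the remote host's IP and port 9001.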

4. Now let's see how to manage our local Docker resources by going back to the home dashboard and clicking on the local endpoint, which takes us to our local instance's dashboard.


  • You can see we have one container running, which is Portainer itself, and here you can manage all containers that are running on this machine. You can start, stop, kill, restart, or inspect anything you want, and you can also add new containers from your preferred images. Basically, everything you can do via the Docker CLI you can do here as well, for example:
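For comparison, the same actions from the Docker CLI would look roughly like this (my-container is just a placeholder name):

docker start my-container      # start a stopped container
docker stop my-container       # stop it gracefully
docker kill my-container       # stop it immediately
docker restart my-container    # restart it
docker inspect my-container    # show its full configuration
docker logs my-container       # view its logs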

One feature I absolutely love about Portainer is the Application Templates. If you quickly want to deploy a database server, a web server, or any other well-known application, you can use these App Templates to spin up the container in a few clicks. From the App Templates page, for example, we can easily deploy an Apache httpd server, an Nginx server, MySQL databases, and other well-known applications, which is pretty cool.

You can also define some Custom Templates.


  • When you click on Add Custom Template, you can define the general options and also paste a Docker Compose file, which is pretty cool and can be used to deploy entire stacks; a small illustrative template is shown below. We will use this stacks feature to deploy a reverse proxy and expose our services to the public internet.
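Just to illustrate, a custom template could contain a tiny Compose definition like the following (a plain Nginx web server, purely an example):

version: '3'
services:
  web:
    image: 'nginx:latest'
    ports:
      - '8080:80'
    volumes:
      - ./html:/usr/share/nginx/html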

As I mentioned, unencrypted HTTP traffic may be fine for a local test environment, but if you want to expose this to the public internet you usually want to encrypt your traffic with HTTPS and obtain a trusted SSL certificate.

5. Head over to Stacks and create a new stack named nginxproxymanager.


In my case, I am using this Docker Compose file:

version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"
      DB_MYSQL_PASSWORD: "npm"
      DB_MYSQL_NAME: "npm"
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
  db:
    image: 'jc21/mariadb-aria:latest'
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'npm'
    volumes:
      - ./data/mysql:/var/lib/mysql
  • Select Web editor, paste it inside the editor box, and then click on Deploy the stack. That will automatically create and deploy those two containers and set up everything as described in the Docker Compose file (a quick CLI check is sketched below).
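If you prefer to double-check from the CLI, the containers of the stack should show up in docker ps, for example:

# Stack containers are prefixed with the stack name
docker ps --filter name=nginxproxymanager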

As you can see, our Nginx Proxy Manager is now running. We have full control over the stack, and if we click on it, we can stop it, delete it, or change its configuration.


  • We can see the two containers: the first one is the application and the second one is its database. The stack publishes ports 80 and 443 for proxied traffic, while port 81 serves the Web UI of the Nginx Proxy Manager, which we can quickly check as shown below.
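A quick way to confirm the admin UI is answering, using the server's public IP from before (the address is only a placeholder for your own):

curl -I http://206.180.54.117:81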

6. Now, go to Endpoints, edit the local endpoint, set the Public IP field to the public IP we used before, and update the endpoint.


  • Now, if we click on port 81 in the container list, it takes us to the public IP address on port 81, which is the Nginx Proxy Manager Web UI. Note that this is also an unsecured connection; you can of course later create proxy hosts for this Web UI as well as for the Portainer UI.


  • Set up secure admin credentials when it asks you to, before we go any further.


  • Enter the domain name you obtained, enter portainer (the name of the Portainer container) as the Forward Hostname/IP, and set the Forward Port to 9000. Also enable Block Common Exploits for some extra security.


  • On the SSL tab, select Request a new SSL Certificate, enable Force SSL, agree to the Let's Encrypt terms, and click on Save.

So, that will try to obtain a trusted SSL certificate and add the proxy host. Before we can actually reach the Portainer Web UI through it, though, there is one more thing to take care of.

We deployed the Nginx Proxy Manager as a Docker Compose stack, which automatically creates a new, isolated Docker network and attaches the Nginx Proxy Manager to it. The Portainer container is running on a separate Docker network, so we need to make sure that every container we want to expose via the reverse proxy is attached to the same network as the proxy.

We need to redeploy our Portainer container, which is easily done with the docker run command from the CLI. We also want to stop publishing port 9000, because otherwise people could still reach the Portainer container over unencrypted HTTP on that port, which is something we don't want. So, let's go back to our server and execute:

docker stop portainer
docker rm portainer

Now we check which Docker networks currently exist via

docker network ls

You will see a network named nginxproxymanager_default, and we want to copy that name to use it in our next command.

docker run -d -p 8000:8000 --network nginxproxymanager_default --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
  • This time we didn't publish port 9000 directly to the public internet, because we only want to reach Portainer via the reverse proxy, and we added the --network flag to put the Portainer container in the same network as the Nginx Proxy Manager. You can verify the network membership as shown below.
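To confirm that both containers are now attached to the same network, you can inspect it (the --format flag just trims the output down to the container names):

docker network inspect nginxproxymanager_default --format '{{range .Containers}}{{.Name}} {{end}}'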

So, now let's go back to our Nginx Proxy Manager and click on the link generated for the proxy host.


  • You should now have access to the Portainer container via HTTPS, and the connection is secured. We can simply use the credentials we created earlier to log into the Portainer Web UI, and you can see everything is running fine. You can also verify the certificate from a terminal, as sketched below.
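If you want to verify the certificate from a terminal as well, something like this works (the domain is only a placeholder for the one you configured):

# Show the response headers and TLS handshake details
curl -vI https://portainer.example.com

# Print the certificate issuer and validity dates
openssl s_client -connect portainer.example.com:443 -servername portainer.example.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates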


So, this is how you can easily and securely expose the Portainer Web UI to the public internet via the Nginx Proxy Manager.

I think Portainer is a really cool piece of software; it makes it amazingly easy to set up Docker Compose stacks or single containers and manage your entire Docker server.

If this blog helps you in any way, please share it, and tag me @VrashTwt if you tweet about it.

Thanks to the Portainer documentation, The Digital Life's YouTube channel, and Saiyam Pathak for the awesome resources and support.