I have been working on removing Google services from my life: Google Drive, Calendar, Gmail, and Google Photos, to name the big ones. Hosting all of these yourself sounds expensive, right? It doesn't have to be. Nextcloud is an all-in-one solution that covers most of the above out of the box. Email is the odd one out, but I will take a look at that in a future post.
Nextcloud is an excellent open-source solution for self-hosting files in the cloud. It provides storage, sharing, and search functionality, and it also offers a calendar, contacts, mail, and much more through its app marketplace. You can even edit office documents in the browser, just like in Google Drive. Without further delay, let's install Nextcloud.
ℹ️
I will be installing Nextcloud to Digital Ocean. However, you can install Nextcloud on your cloud of choice.
Creating the infrastructure
Using Terraform
Before we create the infrastructure, here is what the project structure looks like. For now we will just be working in the nextcloud.tf file.
.
├── nextcloud.tf
We will be creating the infrastructure using Terraform. Terraform is a great tool that allows us to automate infrastructure management. We will be using the following Terraform spec to create a project, a server, a file storage bucket, and an alert.
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

# Set the variable value in *.tfvars file
# or using -var="do_token=..." CLI option
variable "do_token" {
  type = string
}

variable "do_spaces_access_id" {
  type = string
}

variable "do_spaces_secret_key" {
  type = string
}

variable "ssh_key_id" {
  type = number
}

variable "alert_email" {
  type = string
}

# Configure the DigitalOcean Provider
provider "digitalocean" {
  token             = var.do_token
  spaces_access_id  = var.do_spaces_access_id
  spaces_secret_key = var.do_spaces_secret_key
}
resource "digitalocean_droplet" "nextcloud_server" {
  image             = "ubuntu-20-04-x64"
  name              = "nextcloud-server-1"
  region            = "tor1"
  size              = "s-1vcpu-2gb"
  monitoring        = true
  ssh_keys          = [var.ssh_key_id]
  tags              = ["document", "nextcloud"]
  droplet_agent     = true
  graceful_shutdown = true
}

resource "digitalocean_spaces_bucket" "sfisoftware_documents" {
  name   = "com.sfisoftware.documents"
  region = "nyc3"
}

resource "digitalocean_project" "documents" {
  name        = "Documents"
  description = "Project for housing documents."
  purpose     = "Document hosting"
  environment = "Production"
  resources = [
    digitalocean_droplet.nextcloud_server.urn,
    digitalocean_spaces_bucket.sfisoftware_documents.urn
  ]
}
resource "digitalocean_monitor_alert" "cpu_alert" {
  alerts {
    email = [var.alert_email]
  }
  window      = "5m"
  type        = "v1/insights/droplet/cpu"
  compare     = "GreaterThan"
  value       = 70
  enabled     = true
  entities    = [digitalocean_droplet.nextcloud_server.id]
  description = "Alert about CPU usage"
}
Run the init command in your project directory to download and install any providers.
terraform init
You should change the resource names to your liking. Make sure you have API access to your cloud of choice; I have an API token in my environment that I will be using. The do_spaces_access_id and do_spaces_secret_key variables grant access to the S3-compatible bucket, which can be used from within Nextcloud for easily expandable storage. Now let's run the following command to preview the changes Terraform will make.
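The Terraform commands below read their -var values from environment variables. Here is one way to set them up, with placeholder values that you would replace with your own credentials from the DigitalOcean dashboard:

```shell
# Placeholder values -- substitute your own API token, SSH key ID, and
# Spaces keys from the DigitalOcean dashboard before running Terraform.
export DIGITAL_OCEAN_TOKEN="dop_v1_example_token"
export DIGITAL_OCEAN_SSH_KEY_ID="12345678"
export DIGITAL_OCEAN_SPACES_ACCESS_ID="EXAMPLE_ACCESS_ID"
export DIGITAL_OCEAN_SPACES_SECRET_KEY="example_secret_key"
export ALERT_EMAIL="you@example.com"
```

Exporting them once per terminal session keeps the secrets out of your shell history and out of the Terraform files.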
terraform plan \
-var "alert_email=$ALERT_EMAIL" \
-var "do_token=$DIGITAL_OCEAN_TOKEN" \
-var "ssh_key_id=$DIGITAL_OCEAN_SSH_KEY_ID" \
-var "do_spaces_access_id=$DIGITAL_OCEAN_SPACES_ACCESS_ID" \
-var "do_spaces_secret_key=$DIGITAL_OCEAN_SPACES_SECRET_KEY"
Now that the plan looks good, let's apply it to the cloud.
terraform apply \
-var "alert_email=$ALERT_EMAIL" \
-var "do_token=$DIGITAL_OCEAN_TOKEN" \
-var "ssh_key_id=$DIGITAL_OCEAN_SSH_KEY_ID" \
-var "do_spaces_access_id=$DIGITAL_OCEAN_SPACES_ACCESS_ID" \
-var "do_spaces_secret_key=$DIGITAL_OCEAN_SPACES_SECRET_KEY"
You will get a prompt to approve the change; you have to type yes to continue. When this command finishes, we will have a virtual machine, an S3-compatible bucket, and an alert/monitor, all assigned to a new project.
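As an optional convenience (not part of the spec above), you can add a Terraform output to surface the droplet's public IP, which we will need later when pointing Ansible at the server. The name server_ip is my own choice:

```hcl
# Optional: expose the droplet's public IP so you don't have to look it
# up in the DigitalOcean dashboard.
output "server_ip" {
  value = digitalocean_droplet.nextcloud_server.ipv4_address
}
```

After an apply, running terraform output server_ip prints the address.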
ℹ️
If you want to destroy the infrastructure and start again, just use the following command:
terraform destroy \
-var "alert_email=$ALERT_EMAIL" \
-var "do_token=$DIGITAL_OCEAN_TOKEN" \
-var "ssh_key_id=$DIGITAL_OCEAN_SSH_KEY_ID" \
-var "do_spaces_access_id=$DIGITAL_OCEAN_SPACES_ACCESS_ID" \
-var "do_spaces_secret_key=$DIGITAL_OCEAN_SPACES_SECRET_KEY"
Installing Nextcloud
Now that our infrastructure is set up, let's create the relevant Ansible files. Run the following command to scaffold the nextcloud-role.
ansible-galaxy init nextcloud-role
After the command finishes, create the following files manually: playbook.yaml in the root of the project, docker-compose.yaml.j2 in the templates folder under nextcloud-role, and requirements.yaml, also in the root.
.
├── nextcloud-role
│ ├── defaults
│ │ └── main.yml
│ ├── files
│ ├── handlers
│ │ └── main.yml
│ ├── meta
│ │ └── main.yml
│ ├── README.md
│ ├── tasks
│ │ └── main.yml
│ ├── templates
│ │ └── docker-compose.yaml.j2
│ ├── tests
│ │ ├── inventory
│ │ └── test.yml
│ └── vars
│ └── main.yml
├── nextcloud.tf
├── playbook.yaml
├── requirements.yaml
Your project folder should look like the above. Now copy/paste the following content into requirements.yaml and then run the command to install dependencies.
---
roles:
  - name: geerlingguy.certbot
    version: 5.0.0
  - name: geerlingguy.docker
    version: 4.1.3
  - name: geerlingguy.nginx
    version: 3.1.0
ansible-galaxy install -r requirements.yaml
After the dependencies have been installed, it's time to create the main task. Open up the tasks/main.yml file and paste the following inside.
---
- name: Create docker-compose file
  template:
    src: docker-compose.yaml.j2
    dest: /root/docker-compose.yaml

- name: Deploy Nextcloud stack
  command: docker-compose up -d
  args:
    chdir: /root

- name: Run Nextcloud cron every 5 minutes
  ansible.builtin.cron:
    name: "Nextcloud cron"
    minute: "*/5"
    job: "docker exec -u www-data root_app_1 php cron.php"
All these tasks do is copy the Docker Compose file to the remote server, start the Nextcloud instance using docker-compose, and create a cron job that runs Nextcloud's background jobs.
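For reference, nothing needs to be created by hand here: the cron task ends up writing an entry like the sketch below to root's crontab. The #Ansible: comment is the marker the cron module uses to find and manage its entry on later runs.

```
#Ansible: Nextcloud cron
*/5 * * * * docker exec -u www-data root_app_1 php cron.php
```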
Next, it's time to create the docker-compose.yaml file. We're going to use the Jinja2 templating format, so the full file name will be docker-compose.yaml.j2. Here are the contents of the file.
version: '3'

volumes:
  nextcloud:
  db:

services:
  db:
    image: mariadb:10.7.3
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - "MYSQL_ROOT_PASSWORD={{ lookup('env', 'NEXTCLOUD_MYSQL_ROOT_PASSWORD') }}"
      - "MYSQL_PASSWORD={{ lookup('env', 'NEXTCLOUD_MYSQL_PASSWORD') }}"
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  app:
    image: nextcloud:22.2.6
    restart: always
    ports:
      - "8080:80"
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    environment:
      - "MYSQL_PASSWORD={{ lookup('env', 'NEXTCLOUD_MYSQL_PASSWORD') }}"
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
In the compose file we specify the database to use and the version of Nextcloud to run. At the time of writing, 22.2.6 is the latest production-ready version; you can update this to whatever the latest version is. We also specify the database name and user information, using environment variables for the sensitive values. Take note that Nextcloud is running on port 8080. Don't worry, we will be proxying and securing the Nextcloud instance with Nginx and Certbot.
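The template lookups above read NEXTCLOUD_MYSQL_ROOT_PASSWORD and NEXTCLOUD_MYSQL_PASSWORD from the environment of the machine you run Ansible from. One way to generate and export them, assuming openssl is installed (any password generator works just as well):

```shell
# Generate two random database passwords and export them so the Jinja2
# lookups in docker-compose.yaml.j2 can find them at template time.
export NEXTCLOUD_MYSQL_ROOT_PASSWORD="$(openssl rand -base64 24)"
export NEXTCLOUD_MYSQL_PASSWORD="$(openssl rand -base64 24)"
```

Keep these somewhere safe; the app container needs the same NEXTCLOUD_MYSQL_PASSWORD value on every future playbook run.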
Lastly, it's time to create a playbook.yaml in the root of the project. Copy/paste the following into the file.
---
- hosts: all
  vars:
    certbot_admin_email: "{{ lookup('env', 'ALERT_EMAIL') }}"
    certbot_create_if_missing: true
    certbot_create_standalone_stop_services: []
    certbot_certs:
      - domains:
          - [DOMAIN]
    nginx_upstreams:
      - name: nextcloud
        strategy: "ip_hash" # "least_conn", etc.
        servers:
          - "localhost:8080"
    nginx_vhosts:
      - listen: "80"
        server_name: "[DOMAIN]"
        return: "301 https://[DOMAIN]$request_uri"
        filename: "[DOMAIN].80.conf"
      - listen: "443 ssl http2"
        server_name: "[DOMAIN]"
        filename: "[DOMAIN].443.conf"
        extra_parameters: |
          location / {
            proxy_buffering off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Proto https;
            proxy_pass http://nextcloud;
          }
          ssl_certificate /etc/letsencrypt/live/[DOMAIN]/fullchain.pem;
          ssl_certificate_key /etc/letsencrypt/live/[DOMAIN]/privkey.pem;
          ssl_protocols TLSv1.2 TLSv1.3;
          ssl_ciphers HIGH:!aNULL:!MD5;
  roles:
    - role: geerlingguy.certbot
    - role: geerlingguy.docker
    - role: geerlingguy.nginx
    - role: nextcloud-role
In the playbook we specify which Ansible roles to run and which variables they should use. At the top are the Certbot variables; make sure to set your email for Certbot SSL alerts. Then we define our Nginx upstream, which is the Nextcloud instance, and a couple of vhosts for Nginx. Notice the upstream uses Nextcloud's port 8080, and the SSL configuration points to the standard Certbot installation location. Make sure to replace all occurrences of [DOMAIN] with your Nextcloud domain.
Now let's run our playbook.
ansible-playbook -u root -i "[SERVER IP]," playbook.yaml
Change [SERVER IP] to the IP of the VM that Terraform created. Make sure to include the trailing comma; it tells Ansible this is an inline host list rather than the path to an inventory file, and omitting it will throw an error.
If all goes well, you should have your very own working installation of Nextcloud, which you can customize and explore to your heart's content.
All the files mentioned in this post can be found at the GitHub link below.