I was following the NOTS tutorial for setting up NextCloud 15 on Ubuntu 18.04 server, but I was trying to do this on my own server as opposed to a VPS like Jacob did with Digital Ocean.
I came across a problem when trying to get Let's Encrypt for my server because I only had the private IP address of my PC, which I have now learned one cannot get SSL/TLS certificates for (I'm fairly new to the server world).
I'd like to set up my own server at a known location to run NextCloud, and then access it over the internet from other locations. Is this possible? I noticed from this page on Datamation that they only include VPS or LAN as options for NextCloud: https://www.datamation.com/open-source/how-to-setup-your-own-cloud-server-with-nextcloud.html. That made me think it had to be either on a home network only, or through a VPS.
Is it possible to set up my own web server for which I can then use Let's Encrypt, and have NextCloud running at my known physical location and accessible over the internet? I'm still researching this online, but have been coming up short.
For more detail, the error I received was right after editing the 000-default.conf file to include the server name. Running "certbot --apache", while asking if I'd like to activate HTTPS for the name I used, wound up resulting in a failure after "Cleaning up challenges" stating "The server could not resolve a domain name :: No valid IP addresses found for mydomain.com". This led me to the following: https://community.letsencrypt.org/t/no-valid-ip-addresses-found-for-my-domain/57417, from which I figured the issue was that I was using my private IP address instead of a public one from a web server.
" which outlines additional software needs that I believe weren't in the NOTS tutorial as a VPS was being used. Examples are installation of LAMP, APCu, and Redis. I'll try going through this guide when I get time.
Some other reading I've done recommends not hosting one's own server as there are a number of disadvantages, but I'm doing this more as a learning exercise than anything, so I'll continue on that front.
Let me know if you have any recommendations or if there are other things I should be considering.
Looking back at the video, the main discrepancy from my approach is that Jacob SSHes into root at the IP address provided for Digital Ocean's VPS. That's effectively where I deviate. I guess at this point, my question is:
"Is it possible to obtain and SSH into a public IP address on a web server run from my computer at home with no additional hardware, such that I can use Let's Encrypt to get HTTPS when accessing the server?"
Alternatively, if that isn't possible:
"Is there another simple means of encrypting the private IP address of my PC to get HTTPS when accessing the server over the internet?"
I'm sorry it took me so long to get back to you, I started a new full-time network technician job yesterday (Monday), so I've been very busy. I'll read through your posts now and respond to any questions I see.
1123artc said
Is it possible to set up my own web server for which I can then use Let's Encrypt, and have NextCloud running at my known physical location and accessible over the internet? I'm still researching this online, but have been coming up short.
It's entirely possible to host your own Nextcloud server at home; I did this myself for years. I would actually recommend home-hosting over a VPS because it gives you more control.
1123artc said
For more detail, the error I received was right after editing the 000-default.conf file to include the server name. Running "certbot --apache", while asking if I'd like to activate HTTPS for the name I used, wound up resulting in a failure after "Cleaning up challenges" stating "The server could not resolve a domain name :: No valid IP addresses found for mydomain.com".
If that's a literal copy-paste of the error message, the issue is that "mydomain.com" is obviously an example domain name and not one that you control. To be clear, your PC (or, rather, your home internet connection) does have a public IP address. You can find it by going to DuckDuckGo and searching "what is my ip", then look at the info it returns at the top or use any of the websites in the results. Your home router performs Network Address Translation (NAT) to translate that single public IP into a separate private one for each of your devices.
1123artc said
I came across this guide: <snip> which outlines additional software needs that I believe weren't in the NOTS tutorial as a VPS was being used. Examples are installation of LAMP, APCu, and Redis.
LAMP stands for Linux, Apache, MySQL/MariaDB, and PHP. All four of those things were included in my tutorial. APCu and Redis are memory caching tools, which were also covered in the tutorial.
1123artc said
At this point, I figure it must be possible as people host their own sites all the time.
Some other reading I've done recommends not hosting one's own server as there are a number of disadvantages, but I'm doing this more as a learning exercise than anything, so I'll continue on that front.
That's bad advice in my opinion. A home-hosted server gives you complete physical control over your data. The only downsides are that your internet connection may not be very fast and your public IP address is liable to change.
1123artc said
Looking back at the video, the main discrepancy from my approach is that Jacob does SSH into the root at the IP address provided for Digital Ocean's VPS. That's effectively where I deviate. I guess at this point, my question is:
"Is it possible to obtain and SSH into a public IP address on a web server run from my computer at home with no additional hardware, such that I can use Let's Encrypt to get HTTPS when accessing the server?"
Alternatively, if that isn't possible:
"Is there another simple means of encrypting the private IP address of my PC to get HTTPS when accessing the server over the internet?"
Like I said, you have a public IP address. You need to find out what that is (very easy to do, just search for it and use any of the tools listed.) Personally, I would register yourself a domain name to make things easier, and set an A record pointing the domain name to your home's IP address; that way, if/when your home's IP address changes (because ISPs don't usually guarantee static IPs for residential connections), you can log into your domain registrar and update the IP address while keeping the domain name the same. Alternatively, if your IP changes too often, there are a number of dynamic DNS resolvers that will basically check what your public IP is periodically (from one of your local machines) and update your domain name for you.
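The update loop a dynamic DNS resolver performs can be sketched in a few lines. This is a toy model for illustration, not any specific service's client: the network and registrar calls are passed in as functions, and the IP addresses used in the demo are made-up documentation addresses.

```python
# Minimal sketch of a dynamic DNS updater's core logic. The three
# callables stand in for: fetching your current public IP, resolving
# what the domain's A record currently points to, and pushing a new
# IP to the registrar (each real service has its own API for these).

def sync_dns_record(get_public_ip, get_dns_ip, update_record):
    """Update the A record only when the public IP has drifted."""
    public_ip = get_public_ip()   # e.g. what "what is my ip" would show
    dns_ip = get_dns_ip()         # what the domain currently resolves to
    if public_ip != dns_ip:
        update_record(public_ip)  # push the new IP to the registrar
        return True               # record was updated
    return False                  # already in sync, nothing to do

# Demo with stubbed values (no real network calls):
updates = []
changed = sync_dns_record(
    get_public_ip=lambda: "203.0.113.7",
    get_dns_ip=lambda: "203.0.113.99",
    update_record=updates.append,
)
```

Services like NoIP run essentially this check on a schedule from one of your local machines.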
1123artc said
Following the NOTS video on "Setting up a home web server", I was made aware of port forwarding. Researching more into this now.
Once you've figured out your public IP address, you will need to go into your router and set a port forward for ports 80 and 443 pointing to the private IP address of your web server. That's how your router knows which device to forward unsolicited requests for those ports to. I can't give specific details on that because it's different for every router, but basically, you should find a settings screen where you can select the outside port (80 and 443, both TCP and UDP for simplicity although HTTP/HTTPS use TCP) and the private IP address (what your web server thinks its IP address is.)
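On most routers, the two rules described above boil down to something like the following (the private address 192.168.1.50 is just an example; substitute whatever internal IP your web server actually has):

```text
External (WAN)         Protocol    Internal destination
Port 80   (HTTP)   ->  TCP         192.168.1.50, port 80
Port 443  (HTTPS)  ->  TCP         192.168.1.50, port 443
```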
I know that's a lot of info, and I also just had to re-type a lot of it because I lost the post I was working on (I guess I need to do some UX research on my website, because right now clicking "whisper" on a post completely trashes anything in the quick reply field.) Let me know if you understand all of that or if you have further questions about any of it. When I used to home-host my Nextcloud server, I had cloud.nerdonthestreet.com resolving to my home's public IP address, and then my router had port forwarding for 80 and 443 sending requests to my web server, which was usually running Debian.
Just wanted to add one more thing, it looks like you will need to acquire a domain name to use one way or another, because Let's Encrypt does its verification based on your domain name (basically, "can you prove that you control this domain name? If so, we will issue you a certificate for it.")
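The HTTP-01 flavor of that verification can be sketched as follows. This is a simplified illustration of the idea, not certbot's actual implementation: certbot drops a token file into your webroot, and the CA fetches it over plain HTTP at your domain to prove you control the name. The token and key-authorization strings below are invented placeholders.

```python
import os
import tempfile

def place_challenge(webroot, token, key_auth):
    """Mimic certbot writing the challenge file under the webroot."""
    path = os.path.join(webroot, ".well-known", "acme-challenge")
    os.makedirs(path, exist_ok=True)
    file_path = os.path.join(path, token)
    with open(file_path, "w") as f:
        f.write(key_auth)
    return file_path

def ca_fetch_ok(challenge_path, expected):
    """Stand-in for the CA fetching the token URL and comparing it."""
    with open(challenge_path) as f:
        return f.read() == expected

# Demo against a throwaway directory instead of a real /var/www webroot:
webroot = tempfile.mkdtemp()
path = place_challenge(webroot, "abc123", "abc123.example-key-auth")
verified = ca_fetch_ok(path, "abc123.example-key-auth")
```

The "No valid IP addresses found" failure happens one step earlier than this: the CA could not even resolve the domain to an IP to fetch the file from.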
You could do that by purchasing one from a registrar like Hover (you can save $2 if you use hover.nots.co), or another registrar like GoDaddy (which will try to upsell you on anything and everything) or Network Solutions or anything else. If you don't want to shell out the ~$13/year, you could try doing it with a free dynamic DNS resolver like noip.com, or using a free domain registrar like dot.tk (I used .tk for Nerd on the Street in the very beginning, but that was before Let's Encrypt existed, so I don't know how they will work together.)
My personal recommendation is to just get a domain name, because it will make things easier for you unless your ISP changes your public IP address ridiculously often, in which case something like NoIP may be necessary.
No worries, and congrats on the new job! Thanks for the confirmation that I can host this on my own as the first thing I wanted to make sure was that I wasn’t on a wild goose chase. I’ve read through your comments and I definitely have a better understanding of what steps to take now.
I should have clarified regarding the error message I got when setting up Let’s Encrypt. I have purchased my own domain name, and I used it appropriately when editing the 000-default.conf file. I just used the “mydomain.com” in my post as a replacement within the error message.
Good to know you hosted your own server for years, and that it isn’t necessarily an impractical idea. I’ve also heard the mantra that “if you can’t fix it, you don’t own it” and figured I’d prefer to set it up myself.
I was under the impression I wouldn’t be able to use the public router IP address, so that’s good to know for sure that I can. As I said I’ve got my domain name, and following your tutorial, I learned how to point it to an IP address. I’ll have to do some tests to see if I can get my router IP address to remain static, but if not I’ll look more into dynamic DNS resolvers.
I was able to log into my router last night and figure out how things work. I’ll have to spend some more time on port forwarding to set it up correctly.
1123artc said
I was under the impression I wouldn’t be able to use the public router IP address, so that’s good to know for sure that I can. As I said I’ve got my domain name, and following your tutorial, I learned how to point it to an IP address. I’ll have to do some tests to see if I can get my router IP address to remain static, but if not I’ll look more into dynamic DNS resolvers.
Some ISPs don't really like people hosting things, but Charter/Spectrum and AT&T were both fine with it when I was doing it. The best thing to do is to just do it and see if you hear from them. I used to hear that IP addresses changed often for residential connections, but Charter/Spectrum only changed mine once or twice in the 4 years that I was hosting NOTS servers at that home. Your best bet to avoid having your public IP change is to avoid rebooting your modem and avoid releasing/renewing your IP on your router (so, basically, the exact opposite behavior of if you wanted to intentionally get your IP changed.) I used NoIP briefly, but I don't remember much about it, as that was before I purchased nerdonthestreet.com (before it was available for purchase, actually) and I was only using it for a Minecraft server.
1123artc said
I was able to log into my router last night and figure out how things work. I’ll have to spend some more time on port forwarding to set it up correctly.
Yep, TCP Port 80 is used for HTTP and TCP Port 443 is used for HTTPS. You can find lists of common ports online, in case you ever want to host any other services (although some ISPs block things like email ports; Charter/Spectrum only allowed outgoing home-hosted email through their designated gateway, ostensibly to prevent spam.) Like I said, it just needs to point to the private/internal IP of your web server. Your router actually tracks every outgoing connection you make in order to route responses to the correct devices, but when it receives unsolicited traffic, it doesn't know which internal device to route it to until you set up port forwarding.
What's going on is actually called Port Address Translation (PAT), also called NAT Overload (older versions of NAT were designed to translate many internal IPs to X number of external IPs, where only X number of people can have connections open at once; PAT is generally used when you only have a single external IP, as is the case with home connections, and allows thousands of simultaneous connections from the same IP.) When you connect to nerdonthestreet.com, your router sends a packet with a random source port and a destination port of 80, and NOTS replies with source port 80 and the destination port set to whatever your random source port was. Port forwarding kind of sidesteps that process; when you connect to your own domain name from outside, you're using a destination port of 80, and when your web server responds, it's using a source port of 80, which is the only time you'll actually see outgoing traffic from your home connection on port 80… generally, only one device on your network can be bound to port 80 at a time, hence the port forwarding to a single private IP so your router knows where port 80 is served from. (This example was for HTTP, all the 80's are 443's for HTTPS.)
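That routing decision can be made concrete with a toy model of the router's translation table plus one forwarding rule. This is a sketch for intuition only (real NAT also tracks protocol and remote address); all IPs and ports are made-up examples.

```python
# Toy model of PAT/NAT overload with a static port-forward rule.

class Router:
    def __init__(self, forwards=None):
        self.table = {}                 # public_port -> (private_ip, private_port)
        self.forwards = forwards or {}  # static rules, e.g. {80: ("192.168.1.50", 80)}
        self.next_port = 50000          # next random-ish public source port

    def outgoing(self, private_ip, private_port):
        """Record an outgoing connection; allocate a public source port."""
        public_port = self.next_port
        self.next_port += 1
        self.table[public_port] = (private_ip, private_port)
        return public_port

    def incoming(self, dest_port):
        """Route an inbound packet: known connection, forward rule, or drop."""
        if dest_port in self.table:     # reply to something we initiated
            return self.table[dest_port]
        if dest_port in self.forwards:  # unsolicited, but port-forwarded
            return self.forwards[dest_port]
        return None                     # unsolicited with no rule: dropped

router = Router(forwards={80: ("192.168.1.50", 80), 443: ("192.168.1.50", 443)})
# A laptop browses the web; the reply comes back to the allocated port:
pub = router.outgoing("192.168.1.20", 51515)
reply_target = router.incoming(pub)
# An outside visitor hits port 80; it gets forwarded to the web server:
visitor_target = router.incoming(80)
# Unsolicited traffic on a port with no rule gets dropped:
dropped = router.incoming(22)
```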
That last paragraph wasn't really necessary, but I'm glad that I typed it, because I remembered that you might not be able to connect to your domain name from inside your network! If your domain name is set to your public IP and you can ping it from outside your network but not from inside, the easiest solution is to edit your hosts file and tell your computer to resolve your domain name to the private IP address instead. (I assume your ISP and/or your router gets confused when it tries to send a NAT-ed packet from your public IP to your public IP; setting the hosts file avoids that by telling your computer to go directly to your web server without touching the internet.) Just do a web search for "Linux hosts file," or "Windows hosts file" if you're using Windows.
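For reference, the hosts-file entry for that workaround is a single line like the following (192.168.1.50 is a made-up private address; use your web server's actual internal IP and your real domain):

```text
# /etc/hosts on Linux, or C:\Windows\System32\drivers\etc\hosts on Windows
192.168.1.50    mydomain.com
```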
Sorry for the rant in the middle there, but I'm glad to help. Hope you get everything up and running how you want it.
Thanks for all the help - I've had it up and running well for a couple days. No problem with the public IP address changing but I'll keep an eye on it. I've seen a lot of talk about DuckDNS as well for dynamic DNS, which seems to be growing in popularity compared to NoIP.
I just wound up forwarding both 80 and 443, but I also used the option with Let's Encrypt to redirect to HTTPS, so I guess it should all go through 443. Thanks for noting that about connecting to my domain from within my network. I haven't had any problems yet, but I also use a VPN on my other devices so maybe that has some impact.
For others who happen to come across this, my setup is Ubuntu Server 18.04 with a manually installed LAMP stack, a domain pointing to my router's external IP address with Let's Encrypt set to always redirect to HTTPS, and port forwarding in my router with both port 80 and port 443 pointing to my web server's private IP address. I essentially followed the NOTS tutorial, but hosting from a physical server at my home with port forwarding.
Hi Jacob, can I chime in here and ask for some advice? I'm basically stuck on a similar issue. I have a domain name and have found the static IP that I get from my ISP. I have set my Ubuntu machine to a static internal IP. The problem is that I haven't opened ports 80 and 443 on my router, and when I do, I get an error that my router's admin software uses port 80, so it won't allow me to forward port 80 to my Ubuntu machine's internal IP. So basically I haven't proceeded past about minute 20 of your YouTube video (which is great, by the way).
Also, I'd like to describe my preferred final setup before proceeding: I'd like to access my Nextcloud instance at nextcloud.mydomain.com and have mydomain.com/index.xxx available for something else (maybe just a website that my daughter and I build together so we can learn stuff).
Following the guide, I have installed Nextcloud into the /var/www/html directory, and I now need to figure out whether it should be configured differently to be accessed from nextcloud.mydomain.com.
*** OK, bit of an update. I figured out how to move my router's admin interface off port 80, and I set up port forwarding for ports 80 and 443, and now I can see my Nextcloud installation by going to mydomain.com. Like Jacob, I am using Hover as my registrar, but I can't get anything to show up for making an A record for nextcloud.mydomain.com.
Hi there! In Hover, go to your domain, DNS, Add A Record, and set Type to "A" and Hostname to, e.g., "nextcloud" if you want nextcloud.mydomain.com, with the IP address set to the same static IP you were already using.
Thanks Jacob, I'll have another try at making that happen, but when I did it before and pinged nextcloud.mydomain.com, the IP that returned was the Hover IP for parked domains, while pinging mydomain.com returned my IP. I'll try again and check my steps. Also, I'll check out your suggested video, but it sounds like I don't necessarily need to move my Nextcloud install out of /var/www/html?
_HC1_dude said
I'll have another try at making that happen, but when I did it before and pinged nextcloud.mydomain.com, the IP that returned was the Hover IP for parked domains, while pinging mydomain.com returned my IP. I'll try again and check my steps.
DNS changes can take hours to propagate, although Hover is usually pretty quick in my experience. If you have two records pointing to the same IP address in the Hover control panel, then they should return the same IP address when you ping them.
_HC1_dude said
Also, I'll check out your suggested video but it sounds like I don't necessarily need to move my next cloud install out of /var/www/html?
If you already have Nextcloud set up, you don't need to move it out of /var/www/html, but you'll want to create another folder in /var/www/<other> for whatever other website you're setting up. You'll just need to make sure the appropriate ServerName/ServerAlias and Directory options are set up in your virtual host configurations. For example:
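A minimal sketch (the hostnames and /var/www/mysite path are placeholders; certbot will generate the matching :443 virtual hosts when you enable HTTPS):

```apache
# Nextcloud stays where it is:
<VirtualHost *:80>
    ServerName nextcloud.mydomain.com
    DocumentRoot /var/www/html
</VirtualHost>

# The second site lives in its own directory:
<VirtualHost *:80>
    ServerName mydomain.com
    ServerAlias www.mydomain.com
    DocumentRoot /var/www/mysite
    <Directory /var/www/mysite>
        Require all granted
    </Directory>
</VirtualHost>
```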
Hi jacobgkau, greetings from Venezuela. I like all your example videos, and I have a lot of affinity for the Linux operating system. I wanted to know if you can help me create subdomains on Ubuntu Server 18.04 LTS?
I have now configured the Nextcloud server integrated with Collabora Online, and I would like to take advantage of that setup to add other applications as subdomains without damaging the initial installation. Thank you.
You can configure additional subdomains using the same process that you used for Nextcloud and Collabora: create a directory, make a new configuration file in /etc/apache2/sites-available pointing to that directory, and enable the new site with the "a2ensite" command.
Hi Jacob, what Linux distro are you using in the "How to install Nextcloud 15" video? It looks really cool. Also, I'm having issues with uploading to Nextcloud: the fastest it will upload a file is around 35mb/s if I'm lucky. Is there a setting I need to adjust to make it upload faster? I have a Windows server on the same hypervisor that I can upload the same file to at 100mb/s, so I think there is something wrong with my Nextcloud or Apache configuration. The installation of Nextcloud worked great! I just thought you might be able to help me fix this problem. Thanks,
I was using Arch Linux with KDE Plasma in that video! As for upload speed, I would ask the Nextcloud folks about that. What protocol or application are you using on Windows Server for comparison? Are you able to achieve greater speeds with SFTP or FTP on the Linux server?
Hi, I just joined. I watched your videos on installing OnlyOffice next to Nextcloud, and also the earlier one on Collabora Online (CODE). I have a question that I hope you can help me with. I will make a contribution to your site.
I have a WordPress site on a SiteGround host, and separately I have a Linode. On the Linode I have Mattermost and Nextcloud (both using NGINX), each on a subdomain of the WordPress site's domain name. I installed OnlyOffice Document Server (not the Docker container); I did not change the port before installing, thinking I could redirect correctly in the config files. However, I am not sure: should I have changed the port to something else, like 81? If I should change the port, what are the config files that I would have to edit? Also, what should basically be in the /etc/nginx/sites-available/onlyoffice config file to direct the onlyoffice subdomain call to the Document Server?
Since you were already using NGINX (which should own ports 80 and 443 on your Linode), and OnlyOffice uses NGINX, you might be able to get away with running OnlyOffice on port 80. What does the NGINX config file for OnlyOffice look like right now? You should be able to set the OnlyOffice subdomain using the server_name directive in the NGINX config file. I don't have an OnlyOffice server running right now, so you'll need to show what the current config file looks like if you want any specific help.
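For reference, a bare-bones NGINX site for that subdomain might look something like the sketch below. The subdomain and the upstream port 81 are assumptions here; point proxy_pass at whatever local port your Document Server actually listens on.

```nginx
server {
    listen 80;
    server_name onlyoffice.mydomain.com;

    location / {
        # Hand everything off to the Document Server's local port:
        proxy_pass http://127.0.0.1:81;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```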
Hello, thanks for the response. Since I posted this, I have made considerable progress with my Linode VM. I have installed Nextcloud with both OnlyOffice and Collabora Online. The whole process was riddled with gotchas and land mines, and one had to be very careful, but it all works. Thanks for your videos, as they were very helpful in getting everything cleaned up. I was able to install all applications with NGINX, and I did not use Docker at all. Installing Collabora Online from the native binaries was not any worse than with Docker, and I would recommend doing it that way, because we can then discover in the code where we can eliminate the usage restrictions for CODE. Also, I think it is nice to be all-NGINX to make the configuration more consistent across the board, unless I am missing better functionality in Apache.
I would appreciate your review of my config files, as I am not sure that they are optimal. What I ended up with was perhaps a consequence of the order in which I installed things and the order in which I found fixes or workarounds. The config files are attached, and below is a summary of what I did. On a separate hosting service, I have a WordPress site with the domain name, with a URL of the form www.[DomainName].org. On a Linode VM, I installed, in order (all using NGINX):
Mattermost: mattermost.[DomainName].org
php7.3-fpm, of course
Nextcloud: nextcloud.[DomainName].org
OnlyOffice (Document Server): onlyoffice.[DomainName].org
Collabora Online (CODE): collaboraonline.[DomainName].org
PhpMyAdmin: phpmyadmin.[DomainName].org
The config files I remember editing are (attached):
/etc/nginx/nginx.conf
/etc/nginx/sites-available/nextcloud
/etc/onlyoffice/documentserver/nginx/ds.conf
/etc/nginx/sites-available/onlyoffice
/etc/nginx/sites-available/collaboraonline
/etc/nginx/sites-available/phpmyadmin
I also attached /etc/nginx/sites-available/default, but I never used it (it is not symbolically linked into sites-enabled).
All the files other than default were originally symbolically linked into sites-enabled. However, I found that Mattermost was always preempting OnlyOffice and Collabora Online, so I moved those latter two into the Mattermost config file, as shown, to get things to work, and removed the symbolic links for them. The Mattermost, OnlyOffice, and Collabora Online configurations worked differently, perhaps because they map to services listening on designated ports, whereas Nextcloud and PhpMyAdmin link to websites under /var/www. Anyway, I would appreciate your review of these files: can they be optimized, and do they make sense? The next thing I would like to tackle is creating a TURN server so that Nextcloud Talk video and audio work across all proxies and firewalls. Do you know of a good video or tutorial on how to do this? I didn't see any place to attach files, so I will put the text in the next reply.
/etc/nginx/nginx.conf
/etc/nginx/sites-available/mattermost
/etc/nginx/sites-available/nextcloud
/etc/onlyoffice/documentserver/nginx/ds.conf
/etc/nginx/sites-available/onlyoffice
/etc/nginx/sites-available/collaboraonline
/etc/nginx/sites-available/phpmyadmin
/etc/nginx/sites-available/default (attached, but never used; not symbolically linked into sites-enabled)
/etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
# The following 2 rules are only needed for the user_webfinger app.
# Uncomment it if you're planning to use this app.
#rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
#rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
# Adding the cache control header for js and css files
# Make sure it is BELOW the location ~ \.php(?:$|/) { block
location ~* \.(?:css|js)$ {
add_header Cache-Control "public, max-age=7200";
# Add headers to serve security related headers
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;
add_header X-Download-Options noopen;
add_header X-Permitted-Cross-Domain-Policies none;
# Optional: Don't log access to assets
access_log off;
}
# Optional: Don't log access to other assets
location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|swf)$ {
access_log off;
}
}
/etc/onlyoffice/documentserver/nginx/ds.conf:
include /etc/nginx/includes/http-common.conf;
## Normal HTTP host
server {
listen 0.0.0.0:81;
listen [::]:81 default_server;
server_name _;
server_tokens off;
## Redirects all traffic to the HTTPS host
root /nowhere; ## root doesn't have to be a valid path since we are redirecting
rewrite ^ https://$host$request_uri? permanent;
}
#HTTP host for internal services
server {
listen 127.0.0.1:81;
listen [::1]:81;
server_name localhost;
server_tokens off;
include /etc/nginx/includes/ds-common.conf;
include /etc/nginx/includes/ds-docservice.conf;
}
/etc/nginx/sites-available/default:
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# https://www.nginx.com/resources/wiki/start/
# https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
# https://wiki.debian.org/Nginx/DirectoryStructure
#
# In most cases, administrators will remove this file from sites-enabled/ and
# leave it as reference inside of sites-available where it will continue to be
# updated by the nginx packaging team.
#
# This file will automatically load configuration files provided by other
# applications, such as Drupal or Wordpress. These applications will be made
# available underneath a path with that package name, such as /drupal8.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##
# Default server configuration
#
server {
listen 80 default_server;
listen [::]:80 default_server;
# SSL configuration
#
# listen 443 ssl default_server;
# listen [::]:443 ssl default_server;
#
# Note: You should disable gzip for SSL traffic.
# See: https://bugs.debian.org/773332
#
# Read up on ssl_ciphers to ensure a secure configuration.
# See: https://bugs.debian.org/765782
#
# Self signed certs generated by the ssl-cert package
# Don't use them in a production server!
#
# include snippets/snakeoil.conf;
root /var/www/html;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
server_name _;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
# pass PHP scripts to FastCGI server
#
#location ~ \.php$ {
# include snippets/fastcgi-php.conf;
#
# # With php-fpm (or other unix sockets):
# fastcgi_pass unix:/run/php/php7.3-fpm.sock;
# # With php-cgi (or other tcp sockets):
# fastcgi_pass 127.0.0.1:9000;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# Virtual Host configuration for example.com
#
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#
#server {
# listen 80;
# listen [::]:80;
#
# server_name example.com;
#
# root /var/www/example.com;
# index index.html;
#
# location / {
# try_files $uri $uri/ =404;
# }
#}
Thank you so much for joining the Nerd Club! I appreciate it a lot. I'm glad my videos were helpful to you.
Looking through your NGINX files, I don't see anything jumping out at me as being "wrong" or "not optimal" (keeping in mind that I look at Apache configuration more often than NGINX configuration.) The most important thing for a configuration file to do is to set the configuration how you need it, so if this configuration's working, it sounds like you've done everything right.
I haven't created a TURN server before, and I don't have much experience with Nextcloud Talk yet– I've primarily used RocketChat combined with Jitsi for chatting over the last couple of years. However, NOTS does have an internal Nextcloud server again as of a few weeks ago, and as great as Jitsi is, it does have quality issues from time to time, so I'll probably be checking Nextcloud Talk out sooner or later.
posted
I came across a problem when trying to get Let's Encrypt for my server because I only had the private IP address of my PC, which I have now learned one cannot get SSL/TLS certificates for (I'm fairly new to the server world).
I'd like to set up my own server at a known location to run NextCloud, and then access it over the internet from other locations. Is this possible? I noticed from this page on Datamation that they only include the VPS, or LAN as options for NextCloud: https://www.datamation.com/open-source/how-to-setup-your-own-cloud-server-with-nextcloud.html. That made me think it had to either be at a home network only, or through a VPS.
Is it possible to set up my own web server for which I can then use Let's Encrypt, and have NextCloud running at my known physical location and accessible over the internet? I'm still researching this online, but have been coming up short.
Thanks in advance.
posted
This led me to the following: https://community.letsencrypt.org/t/no-valid-ip-addresses-found-for-my-domain/57417
From that, I figured the issue was that I was using my private IP address instead of a public one from a web server.
posted
At this point, I figure it must be possible as people host their own sites all the time.
I came across this guide: "Installing Nextcloud on Ubuntu with Redis, APCu, SSL & Apache".
Some other reading I've done recommends not hosting one's own server as there are a number of disadvantages, but I'm doing this more as a learning exercise than anything, so I'll continue on that front.
Let me know if you have any recommendations or if there are other things I should be considering.
posted
"Is it possible to obtain and SSH into a public IP address on a web server run from my computer at home with no additional hardware, such that I can use Let's Encrypt to get HTTPS when accessing the server?"
Alternatively, if that isn't possible:
"Is there another simple means of encrypting the private IP address of my PC to get HTTPS when accessing the server over the internet?"
Thanks again in advance.
posted
Following the NOTS video on "Setting up a home web server", I was made aware of port forwarding. Researching more into this now.
Any more info is still helpful.
posted
It's entirely possible to host your own Nextcloud server at home, I did this myself for years. I would actually recommend home-hosting over a VPS because it gives you more control.
If that's a literal copy-paste error message, the issue is that "mydomain.com" is obviously an example domain name and not one that you control.
To be clear, your PC (or, rather, your home internet connection) does have a public IP address. You can find it by going to DuckDuckGo and searching "what is my ip", then look at the info it returns at the top or use any of the websites in the results. Your home router performs Network Address Translation (NAT) to translate that single public IP into a separate private one for each of your devices.
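The private/public distinction Jacob describes can be sketched in a few lines of shell. The ranges below are the RFC 1918 private blocks, and `is_private_ip` is just an illustrative helper, not part of any tool mentioned in the thread (one easy way to see your actual public address from a terminal is `curl https://api.ipify.org`):

```shell
#!/bin/sh
# Classify an IPv4 address as private (RFC 1918) or public.
# Let's Encrypt cannot validate a private address; you need the
# public one that your router holds on the WAN side.
is_private_ip() {
    case "$1" in
        10.*|192.168.*) echo "private" ;;
        172.1[6-9].*|172.2[0-9].*|172.3[01].*) echo "private" ;;
        *) echo "public" ;;
    esac
}

is_private_ip 192.168.1.10
is_private_ip 8.8.8.8
```

If `is_private_ip` says "private" for the address your server reports, that address is the NAT-translated one, not the one the outside world sees.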
LAMP stands for Linux, Apache, MySQL/MariaDB, and PHP. All four of those things were included in my tutorial. APCu and Redis are memory caching tools, which were also covered in the tutorial.
That's bad advice in my opinion. A home-hosted server gives you complete physical control over your data. The only downsides are that your internet connection may not be very fast and your public IP address is liable to change.
Like I said, you have a public IP address. You need to find out what that is (very easy to do, just search for it and use any of the tools listed.) Personally, I would register yourself a domain name to make things easier, and set an A record pointing the domain name to your home's IP address; that way, if/when your home's IP address changes (because ISPs don't usually guarantee static IPs for residential connections), you can log into your domain registrar and update the IP address while keeping the domain name the same. Alternatively, if your IP changes too often, there are a number of dynamic DNS resolvers that will basically check what your public IP is periodically (from one of your local machines) and update your domain name for you.
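The dynamic DNS idea boils down to a small check-and-update loop. Here is a rough sketch of that logic with `get_current_ip` stubbed out; a real client would query an external lookup service and then call your registrar's or DDNS provider's update API (names and the address below are placeholders):

```shell
#!/bin/sh
# Sketch of what a dynamic DNS client does: cache the last known
# public IP, compare it with the current one, and flag when the
# A record needs updating.
STATE="./last_public_ip"

get_current_ip() {
    echo "203.0.113.7"    # placeholder; a real client queries an external service
}

current="$(get_current_ip)"
last="$(cat "$STATE" 2>/dev/null || echo none)"

if [ "$current" != "$last" ]; then
    echo "IP changed ($last -> $current): update the A record"
    echo "$current" > "$STATE"
else
    echo "IP unchanged ($current): nothing to do"
fi
```

Run from cron every few minutes, this is essentially all a DDNS updater on one of your local machines needs to do.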
Once you've figured out your public IP address, you will need to go into your router and set a port forward for ports 80 and 443 pointing to the private IP address of your web server. That's how your router knows which device to forward unsolicited requests for those ports to. I can't give specific details on that because it's different for every router, but basically, you should find a settings screen where you can select the outside port (80 and 443, both TCP and UDP for simplicity although HTTP/HTTPS use TCP) and the private IP address (what your web server thinks its IP address is.)
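Once the forward is in place, a quick way to sanity-check it is a raw TCP connection attempt. This sketch uses bash's built-in `/dev/tcp`; the address 203.0.113.7 is a placeholder for your own public IP, and the check is only meaningful when run from outside your network (e.g. a phone hotspot), since NAT can make in-network results misleading:

```shell
#!/bin/bash
# Report whether a TCP port on a host accepts connections.
check_port() {
    local host="$1" port="$2"
    if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}

# Substitute your own public IP for the example address.
check_port 203.0.113.7 80
check_port 203.0.113.7 443
```

If both ports report "open" from outside, the router is forwarding correctly and any remaining trouble is on the web server itself.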
I know that's a lot of info, and I also just had to re-type a lot of it because I lost the post I was working on (I guess I need to do some UX research on my website, because right now clicking "whisper" on a post completely trashes anything in the quick reply field.) Let me know if you understand all of that or if you have further questions about any of it. When I used to home-host my Nextcloud server, I had cloud.nerdonthestreet.com resolving to my home's public IP address, and then my router had port forwarding for 80 and 443 sending requests to my web server, which was usually running Debian.
posted
You could do that by purchasing one from a registrar like Hover (you can save $2 if you use hover.nots.co), or another registrar like GoDaddy (which will try to upsell you on anything and everything) or Network Solutions or anything else. If you don't want to shell out the ~$13/year, you could try doing it with a free dynamic DNS resolver like noip.com, or using a free domain registrar like dot.tk (I used .tk for Nerd on the Street in the very beginning, but that was before Let's Encrypt existed, so I don't know how they will work together.)
My personal recommendation is to just get a domain name, because it will make things easier for you unless your ISP changes your public IP address ridiculously often, in which case something like NoIP may be necessary.
posted
No worries, and congrats on the new job! Thanks for the confirmation that I can host this on my own as the first thing I wanted to make sure was that I wasn’t on a wild goose chase. I’ve read through your comments and I definitely have a better understanding of what steps to take now.
I should have clarified regarding the error message I got when setting up Let’s Encrypt. I have purchased my own domain name, and I used it appropriately when editing the 000-default.conf file. I just used the “mydomain.com” in my post as a replacement within the error message.
Good to know you hosted your own server for years, and that it isn’t necessarily an impractical idea. I’ve also heard the mantra that “if you can’t fix it, you don’t own it” and figured I’d prefer to set it up myself.
I was under the impression I wouldn’t be able to use the public router IP address, so that’s good to know for sure that I can. As I said I’ve got my domain name, and following your tutorial, I learned how to point it to an IP address. I’ll have to do some tests to see if I can get my router IP address to remain static, but if not I’ll look more into dynamic DNS resolvers.
I was able to log into my router last night and figure out how things work. I’ll have to spend some more time on port forwarding to set it up correctly.
Thanks again – your help is much appreciated!
posted
Some ISPs don't really like people hosting things, but Charter/Spectrum and AT&T were both fine with it when I was doing it. The best thing to do is to just do it and see if you hear from them. I used to hear that IP addresses changed often for residential connections, but Charter/Spectrum only changed mine once or twice in the 4 years that I was hosting NOTS servers at that home. Your best bet to avoid having your public IP change is to avoid rebooting your modem and avoid releasing/renewing your IP on your router (so, basically, the exact opposite behavior of if you wanted to intentionally get your IP changed.) I used NoIP briefly, but I don't remember much about it, as that was before I purchased nerdonthestreet.com (before it was available for purchase, actually) and I was only using it for a Minecraft server.
Yep, TCP Port 80 is used for HTTP and TCP Port 443 is used for HTTPS. You can find lists of common ports online, in case you ever want to host any other services (although some ISPs block things like email ports; Charter/Spectrum only allowed outgoing home-hosted email through their designated gateway, ostensibly to prevent spam.) Like I said, it just needs to point to the private/internal IP of your web server. Your router actually tracks every outgoing connection you make in order to route responses to the correct devices, but when it receives unsolicited traffic, it doesn't know which internal device to route it to until you set up port forwarding.
What's going on is actually called Port Address Translation (PAT), also called NAT Overload (older versions of NAT were designed to translate many internal IPs to X number of external IPs, where only X number of people can have connections open at once; PAT is generally used when you only have a single external IP, as is the case with home connections, and allows thousands of simultaneous connections from the same IP.) When you connect to nerdonthestreet.com, your router sends a packet with a random source port and a destination port of 80, and NOTS replies with source port 80 and the destination port set to whatever your random source port was. Port forwarding kind of sidesteps that process; when you connect to your own domain name from outside, you're using a destination port of 80, and when your web server responds, it's using a source port of 80, which is the only time you'll actually see outgoing traffic from your home connection on port 80… generally, only one device on your network can be bound to port 80 at a time, hence the port forwarding to a single private IP so your router knows where port 80 is served from. (This example was for HTTP, all the 80's are 443's for HTTPS.)
That last paragraph wasn't really necessary, but I'm glad that I typed it, because I remembered that you might not be able to connect to your domain name from inside your network! If your domain name is set to your public IP and you can ping it from outside your network but not from inside, the easiest solution is to edit your hosts file and tell your computer to resolve your domain name to the private IP address instead. (I assume your ISP and/or your router gets confused when it tries to send a NAT-ed packet from your public IP to your public IP; setting the hosts file avoids that by telling your computer to go directly to your web server without touching the internet.) Just do a web search for "Linux hosts file," or "Windows hosts file" if you're using Windows.
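The hosts-file fix amounts to one extra line mapping the domain to the server's private address. This sketch works on a scratch copy (`./hosts.demo`) so it runs safely anywhere; on a real machine you would edit /etc/hosts as root, and the domain and IP here are examples:

```shell
#!/bin/sh
# Map a domain to the web server's PRIVATE address so machines
# inside the network reach it directly instead of going through NAT.
DOMAIN="cloud.yourdomain.com"
PRIVATE_IP="192.168.1.50"
HOSTS_FILE="./hosts.demo"          # on a real machine: /etc/hosts

cp /etc/hosts "$HOSTS_FILE" 2>/dev/null || touch "$HOSTS_FILE"

# Only append the entry if the domain isn't already present.
if ! grep -q "$DOMAIN" "$HOSTS_FILE"; then
    printf '%s %s\n' "$PRIVATE_IP" "$DOMAIN" >> "$HOSTS_FILE"
fi

grep "$DOMAIN" "$HOSTS_FILE"
```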
Sorry for the rant in the middle there, but I'm glad to help. Hope you get everything up and running how you want it.
posted
I just wound up forwarding both 80 and 443, but I also used the option with Let's Encrypt to redirect to HTTPS, so I guess it should all go through 443. Thanks for noting that about connecting to my domain from within my network. I haven't had any problems yet, but I also use a VPN on my other devices, so maybe that has some impact.
For others who happen to come across this, my setup is Ubuntu Server 18.04 with a manually installed LAMP stack, a domain pointing to my router's external IP address with Let's Encrypt set to always redirect to HTTPS, and port forwarding in my router with both port 80 and port 443 pointing to my web server's private IP address. Essentially, I followed the NOTS tutorial but hosted from a physical server at my home with port forwarding.
Again, thanks for the help!
posted
Can I chime in here and ask for some advice?
I'm basically stuck on a similar issue. I have a domain name and have found my static IP that I get from my ISP. I have set my Ubuntu machine to a static internal IP.
The problem is that I haven't opened ports 80 and 443 on my router and when I do I get an error that my router admin software uses port 80, so it won't allow me to port forward 80 to my Ubuntu machine's internal IP.
So basically I haven't proceeded past about minute 20 of your YouTube video (which is great, by the way).
Also, I'd like to describe my preferred final setup before proceeding:
I'd like to access my nextcloud instance at nextcloud.mydomain.com/ and have mydomain.com/index.xxx available for something else (maybe just a website that my daughter and I build together so we can learn stuff).
Following the guide I used, I have installed Nextcloud into the /var/www/html directory, and I need to figure out now if it should be configured differently to be accessed from nextcloud.mydomain.com.
*** OK, bit of an update. I figured out how to move my router's admin interface off port 80, and I set up port forwarding for ports 80 and 443, and now I can see my Nextcloud installation by going to mydomain.com.
Like Jacob, I am using Hover as my registrar, but I can't get anything to show up when making an A record for nextcloud.mydomain.com.
Hope you can help
posted
For configuration of the subdomain on the web server side, take a look at the setup for my Wordpress video: https://nerdonthestreet.com/episode/tech/installing-wordpress-on It's called "virtual hosts" in Apache.
posted
I'll have another try at making that happen, but when I did it before and pinged nextcloud.mydomain.com, the IP that returned was the Hover IP for parked domains, while pinging mydomain.com returned my IP. I'll try again and check my steps.
Also, I'll check out your suggested video, but it sounds like I don't necessarily need to move my Nextcloud install out of /var/www/html?
_HC1_dude
posted
DNS changes can take hours to propagate, although Hover is usually pretty quick in my experience. If you have two records pointing to the same IP address in the Hover control panel, then they should return the same IP address when you ping them.
If you already have Nextcloud set up, you don't need to move it out of /var/www/html, but you'll want to create another folder in /var/www/<other> for whatever other website you're setting up. You'll just need to make sure the appropriate ServerName/ServerAlias and Directory options are set up in your virtual host configurations. For example:
— nextcloud.conf —
<VirtualHost *:80>
    ServerName cloud.yourdomain.com
    DocumentRoot /var/www/html
</VirtualHost>
— www-default.conf —
<VirtualHost *:80>
    ServerName yourdomain.com
    ServerAlias www.yourdomain.com
    DocumentRoot /var/www/<other-CMS>
</VirtualHost>
posted
I have now configured the Nextcloud server integrated with Collabora Online, and I would like to take advantage of this setup to add other applications as subdomains without damaging the initial installation.
Thank you,
Fernando Hernandez
posted
Hi Fernando,
You can configure additional subdomains using the same process that you used for Nextcloud and Collabora: create a directory, make a new configuration file in /etc/apache2/sites-available pointing to that directory, and enable the new site with the "a2ensite" command.
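Sketched as shell, assuming a Debian/Ubuntu Apache layout (the site name `apps.yourdomain.com` and the paths are examples; here the file is written to a local ./sites-available so the commands run anywhere, while on the server the real directory is /etc/apache2/sites-available):

```shell
#!/bin/sh
# Create a virtual host config for a new subdomain, then show it.
SITE="apps.yourdomain.com"
mkdir -p ./sites-available

cat > "./sites-available/$SITE.conf" <<EOF
<VirtualHost *:80>
    ServerName $SITE
    DocumentRoot /var/www/apps
</VirtualHost>
EOF

# On the server you would then create the directory and enable the site:
#   sudo mkdir -p /var/www/apps
#   sudo a2ensite $SITE.conf
#   sudo systemctl reload apache2
cat "./sites-available/$SITE.conf"
```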
posted
What Linux distro were you using in the "How to install Nextcloud 15" video? It looks really cool. Also, I'm having issues with uploading to Nextcloud: the fastest it will upload a file is around 35 MB/s if I'm lucky. Is there a setting I need to adjust to make it upload faster? I have a Windows server on the same hypervisor that can upload the same file at 100 MB/s, so I think there is something wrong with my Nextcloud or Apache configuration. The installation of Nextcloud worked great! I just thought you might be able to help me fix this problem.
Thanks,
posted
Hello Fredericksun,
I was using Arch Linux with KDE Plasma in that video! As for upload speed, I would ask the Nextcloud folks about that. What protocol or application are you using on Windows Server for comparison? Are you able to achieve greater speeds with SFTP or FTP on the Linux server?
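One way to narrow down whether the ~35 MB/s ceiling comes from Nextcloud/PHP or from the network path is to time a raw transfer of a known-size file. A rough sketch, where the local `cp` is a stand-in for the real `scp`/SFTP transfer (filenames are examples):

```shell
#!/bin/sh
# Create a 100 MB test file and time how fast a transfer command moves it.
# For the real comparison, replace the cp line with something like:
#   scp testfile user@server:/tmp/
dd if=/dev/zero of=testfile bs=1M count=100 2>/dev/null

start=$(date +%s)
cp testfile testfile.copy
end=$(date +%s)

elapsed=$((end - start))
[ "$elapsed" -eq 0 ] && elapsed=1   # local copies can finish in under a second
echo "transferred 100 MB in ${elapsed}s (~$((100 / elapsed)) MB/s)"
```

If raw SCP/SFTP to the same server is much faster than a Nextcloud upload, the bottleneck is in the PHP/Apache stack rather than the network or disk.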
posted
I just joined. I watched your videos on installing OnlyOffice next to Nextcloud, and also the earlier one on Collabora Online (CODE).
I have a question that I hope you can help me with. I will make a contribution to your site.
I have a WordPress site on a SiteGround host, and separately I have a Linode. On the Linode I have Mattermost and Nextcloud (both using nginx), each using a subdomain of the WordPress site's domain name. I installed OnlyOffice Document Server (not the Docker container); I did not change the port before installing, thinking I could redirect correctly in the config files. However, I am not sure: should I have changed the port to something else, like 81? If I should change the port, what are the config files I would have to edit? Also, what should basically be in the /etc/nginx/sites-available/onlyoffice config file to direct the onlyoffice subdomain call to the Document Server?
Thanks,
Dennis
posted
Since you were already using NGINX (which should own ports 80 and 443 on your Linode), and OnlyOffice uses NGINX, you might be able to get away with running OnlyOffice on port 80. What does the NGINX config file for OnlyOffice look like right now? You should be able to set the OnlyOffice subdomain using the server_name directive in the NGINX config file. I don't have an OnlyOffice server running right now, so you'll need to show what the current config file looks like if you want any specific help.
posted
I was able to install all applications with nginx, and I did not use Docker at all. The installation of Collabora Online from the native binaries was not any worse than with Docker, and I would recommend doing it that way, because we can then discover in the code where we can eliminate the usage restrictions for CODE. Also, I think it is nice to be all in nginx to make the configuration more consistent across the board, unless I am missing better functionality in Apache.
I would appreciate your review of my config files as I am not sure that they are optimal. What I ended up with perhaps was a consequence of the order in which I installed things and the order in which I found fixes or workarounds.
The config files are attached and below is a summary of what I did.
On a separate hosting service I have a WordPress site with the domain name, URL of the form www.[DomainName].org. On a Linode VM I installed, in order (all using nginx):
Mattermost: mattermost.[DomainName].org
php7.3-fpm of course
Nextcloud: nextcloud.[DomainName].org
OnlyOffice(Document Server): onlyoffice.[DomainName].org
CollaboraOnline (CODE): collaboraonline.[DomainName].org
PhpMyAdmin: phpmyadmin.[DomainName].org
The config files that I remember editing are attached:
/etc/nginx/nginx.conf
/etc/nginx/sites-available/nextcloud
/etc/onlyoffice/documentserver/nginx/ds.conf
/etc/nginx/sites-available/onlyoffice
/etc/nginx/sites-available/collaboraonline
/etc/nginx/sites-available/phpmyadmin
I also attached /etc/nginx/sites-available/default but I never used it (not symbolically linked to sites-enabled).
All the files other than default were originally symbolically linked into sites-enabled. However, I found that the Mattermost config was always preempting onlyoffice and collaboraonline, so I moved those latter two into the Mattermost config file as shown to get things to work, and removed their symbolic links. Mattermost, OnlyOffice, and Collabora Online are configured differently, perhaps because they map to services listening on designated ports, whereas Nextcloud and phpMyAdmin link to websites under /var/www.
Anyway, I would appreciate your review of these files: can they be optimized, and do they make sense?
The next thing I would like to tackle is creating a TURN server so that Nextcloud Talk video and audio work across all proxies and firewalls. Do you know of a good video or tutorial on how to do this?
I didn’t see any place to attach files so I will put the text in the next reply.
posted
/etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
# upload_max_filesize 512;
client_max_body_size 512m;
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}
/etc/nginx/sites-available/mattermost:
upstream backend {
server [IpAddress]:8065;
keepalive 32;
}
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;
#server {
# listen 80;
# listen [::]:80;
# server_name collaboraonline.[DomainName].org;
#location / {
#proxy_pass http://127.0.0.1:9980;
#}
# # return 301 https://$server_name$request_uri;
#}
server {
listen 443 ssl;# http2;
server_name collaboraonline.[DomainName].org;
ssl on;
ssl_certificate /etc/letsencrypt/live/collaboraonline.[DomainName].org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/collaboraonline.[DomainName].org/privkey.pem;
ssl_verify_client off;
ssl_session_timeout 1d;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH";
ssl_prefer_server_ciphers on;
ssl_session_cache builtin:1099 shared:SSL:50m;
# HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
add_header Strict-Transport-Security max-age=15768000;
add_header X-Content-Type-Options nosniff;
# OCSP Stapling —
# fetch OCSP records from URL in ssl_certificate and cache them
ssl_stapling on;
ssl_stapling_verify on;
#static files
location ^~ /loleaflet {
proxy_pass http://127.0.0.1:9980;
proxy_set_header Host $http_host;
}
#location / {
# proxy_pass http://127.0.0.1:9980;
# proxy_set_header Host $http_host;
# }
# WOPI discovery URL
location ^~ /hosting/discovery {
proxy_pass http://127.0.0.1:9980;
proxy_set_header Host $http_host;
}
# Capabilities
location ^~ /hosting/capabilities {
proxy_pass http://localhost:9980;
proxy_set_header Host $http_host;
}
# Main websocket
location ~ ^/lool/(.*)/ws$ {
proxy_pass http://127.0.0.1:9980;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $http_host;
proxy_read_timeout 36000s;
}
# Download, presentation and image upload
location ~ ^/lool {
proxy_pass http://127.0.0.1:9980;
proxy_set_header Host $http_host;
}
# Admin Console websocket
location ^~ /lool/adminws {
proxy_pass http://127.0.0.1:9980;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $http_host;
proxy_read_timeout 36000s;
}
}
#server {
#listen 80;
#server_name onlyoffice.[DomainName].org;
#location / {
#proxy_pass http://127.0.0.1:81;
#}
#}
#server {
# listen 80;# default_server;
# server_name mattermost.[DomainName].org ;
# return 301 https://$server_name$request_uri;
#}
#server {
#listen 80;
#server_name onlyoffice.[DomainName].org;
#location / {
#proxy_pass http://127.0.0.1:81;
#}
#}
server {
listen 443 ssl http2;
server_name onlyoffice.[DomainName].org;
ssl on;
ssl_certificate /etc/letsencrypt/live/onlyoffice.[DomainName].org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/onlyoffice.[DomainName].org/privkey.pem;
ssl_verify_client off;
ssl_session_timeout 1d;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_prefer_server_ciphers on;
ssl_session_cache builtin:1099 shared:SSL:50m;
# HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
add_header Strict-Transport-Security max-age=15768000;
add_header X-Content-Type-Options nosniff;
# OCSP Stapling —
# fetch OCSP records from URL in ssl_certificate and cache them
ssl_stapling on;
ssl_stapling_verify on;
location / {
client_max_body_size 50M;
proxy_set_header Connection "";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Frame-Options SAMEORIGIN;
proxy_buffers 256 16k;
proxy_buffer_size 16k;
proxy_read_timeout 600s;
#proxy_cache mattermost_cache;
proxy_cache_revalidate on;
proxy_cache_min_uses 2;
proxy_cache_use_stale timeout;
proxy_cache_lock on;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:81;
}
}
server {
listen 80;# default_server;
server_name mattermost.[DomainName].org ;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
server_name mattermost.[DomainName].org;
ssl on;
ssl_certificate /etc/letsencrypt/live/mattermost.[DomainName].org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mattermost.[DomainName].org/privkey.pem;
ssl_session_timeout 1d;
ssl_protocols TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:50m;
# HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
add_header Strict-Transport-Security max-age=15768000;
# OCSP Stapling —
# fetch OCSP records from URL in ssl_certificate and cache them
ssl_stapling on;
ssl_stapling_verify on;
location ~ /api/v[0-9]+/(users/)?websocket$ {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
client_max_body_size 50M;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Frame-Options SAMEORIGIN;
proxy_buffers 256 16k;
proxy_buffer_size 16k;
client_body_timeout 60;
send_timeout 300;
lingering_timeout 5;
proxy_connect_timeout 90;
proxy_send_timeout 300;
proxy_read_timeout 90s;
proxy_pass http://backend;
}
location / {
client_max_body_size 50M;
proxy_set_header Connection "";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Frame-Options SAMEORIGIN;
proxy_buffers 256 16k;
proxy_buffer_size 16k;
proxy_read_timeout 600s;
proxy_cache mattermost_cache;
proxy_cache_revalidate on;
proxy_cache_min_uses 2;
proxy_cache_use_stale timeout;
proxy_cache_lock on;
proxy_http_version 1.1;
proxy_pass http://backend;
}
}
posted
/etc/nginx/sites-available/nextcloud:
upstream php-handler {
#server 127.0.0.1:9000;
server unix:/run/php/php7.3-fpm.sock;
}
server {
listen 80;
server_name nextcloud.[DomainName].org;
root /var/www/nextcloud;
index index.htm index.html;
location / {
try_files $uri $uri/ =404;
}
}
server {
listen 443 ssl;
server_name nextcloud.[DomainName].org;
ssl_certificate /etc/letsencrypt/live/nextcloud.[DomainName].org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/nextcloud.[DomainName].org/privkey.pem;
# Path to the root of your installation
root /var/www/nextcloud/;
# set max upload size
client_max_body_size 10G;
fastcgi_buffers 64 4K;
# Disable gzip to avoid the removal of the ETag header
gzip off;
# Uncomment if your server is build with the ngx_pagespeed module
# This module is currently not supported.
#pagespeed off;
index index.php;
error_page 403 /core/templates/403.php;
error_page 404 /core/templates/404.php;
rewrite ^/.well-known/carddav /remote.php/dav/ permanent;
rewrite ^/.well-known/caldav /remote.php/dav/ permanent;
# The following 2 rules are only needed for the user_webfinger app.
# Uncomment it if you're planning to use this app.
#rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
#rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location ~ ^/(build|tests|config|lib|3rdparty|templates|data)/ {
deny all;
}
location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
location / {
rewrite ^/remote/(.*) /remote.php last;
rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;
try_files $uri $uri/ =404;
}
location ~ \.php(?:$|/) {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param HTTPS on;
fastcgi_param modHeadersAvailable true; #Avoid sending the security headers twice
fastcgi_pass php-handler;
fastcgi_intercept_errors on;
}
# Adding the cache control header for js and css files
# Make sure it is BELOW the location ~ \.php(?:$|/) { block
location ~* \.(?:css|js)$ {
add_header Cache-Control "public, max-age=7200";
# Add headers to serve security related headers
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;
add_header X-Download-Options noopen;
add_header X-Permitted-Cross-Domain-Policies none;
# Optional: Don't log access to assets
access_log off;
}
# Optional: Don't log access to other assets
location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|swf)$ {
access_log off;
}
}
/etc/onlyoffice/documentserver/nginx/ds.conf:
include /etc/nginx/includes/http-common.conf;

## Normal HTTP host
server {
    listen 0.0.0.0:81;
    listen [::]:81 default_server;
    server_name _;
    server_tokens off;

    ## Redirects all traffic to the HTTPS host
    root /nowhere; ## root doesn't have to be a valid path since we are redirecting
    rewrite ^ https://$host$request_uri? permanent;
}

# HTTP host for internal services
server {
    listen 127.0.0.1:81;
    listen [::1]:81;
    server_name localhost;
    server_tokens off;

    include /etc/nginx/includes/ds-common.conf;
    include /etc/nginx/includes/ds-docservice.conf;
}

## HTTPS host
server {
    listen 0.0.0.0:443 ssl;
    listen [::]:443 ssl default_server;
    server_tokens off;
    root /usr/share/nginx/html;

    ## Strong SSL Security
    ## https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
    ssl on;
    ssl_certificate /etc/letsencrypt/live/onlyoffice.[DomainName].org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/onlyoffice.[DomainName].org/privkey.pem;
    ssl_verify_client off;

    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_session_cache builtin:1000 shared:SSL:50m;
    ssl_prefer_server_ciphers on;

    add_header Strict-Transport-Security max-age=31536000;
    # add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;

    ## [Optional] If your certificate has OCSP, enable OCSP stapling to reduce the overhead and latency of running SSL.
    ## Replace with your ssl_trusted_certificate. For more info see:
    ## - https://medium.com/devops-programming/4445f4862461
    ## - https://www.ruby-forum.com/topic/4419319
    ## - https://www.digitalocean.com/community/tutorials/how-to-configure-ocsp-stapling-on-apache-and-nginx
    # ssl_stapling on;
    # ssl_stapling_verify on;
    # ssl_trusted_certificate /etc/nginx/ssl/stapling.trusted.crt;
    # resolver 208.67.222.222 208.67.222.220 valid=300s; # Can change to your DNS resolver if desired
    # resolver_timeout 10s;

    ## [Optional] Generate a stronger DHE parameter:
    ##   cd /etc/ssl/certs
    ##   sudo openssl dhparam -out dhparam.pem 4096
    ##
    # ssl_dhparam /etc/ssl/certs/dhparam.pem;

    include /etc/nginx/includes/ds-*.conf;
}
/etc/nginx/sites-available/onlyoffice:
upstream backend2 {
    server [IpAddress]:81;
    keepalive 32;
}

#proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;

server {
    listen 80;
    listen [::]:80;
    server_name onlyoffice.[DomainName].org;

    location / {
        proxy_pass http://127.0.0.1:81;
    }
    # return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name onlyoffice.[DomainName].org;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/onlyoffice.[DomainName].org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/onlyoffice.[DomainName].org/privkey.pem;
    ssl_verify_client off;
    ssl_session_timeout 1d;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_prefer_server_ciphers on;
    ssl_session_cache builtin:1099 shared:SSL:50m;

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;
    add_header X-Content-Type-Options nosniff;

    # OCSP Stapling
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;

    location / {
        proxy_pass http://127.0.0.1:8443;
    }
}
/etc/nginx/sites-available/collaboraonline:
upstream backend3 {
    server [IpAddress]:9980;
    keepalive 32;
}

#proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;

#server {
#    listen 80;
#    listen [::]:80;
#    server_name collaboraonline.[DomainName].org;
#
#    location / {
#        proxy_pass http://127.0.0.1:9980;
#    }
#    # return 301 https://$server_name$request_uri;
#}

server {
    listen 443 ssl; # http2;
    server_name collaboraonline.[DomainName].org;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/collaboraonline.[DomainName].org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/collaboraonline.[DomainName].org/privkey.pem;
    ssl_verify_client off;
    ssl_session_timeout 1d;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH";
    ssl_prefer_server_ciphers on;
    ssl_session_cache builtin:1099 shared:SSL:50m;

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;
    add_header X-Content-Type-Options nosniff;

    # OCSP Stapling
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;

    # static files
    location ^~ /loleaflet {
        proxy_pass http://127.0.0.1:9980;
        proxy_set_header Host $http_host;
    }

    # WOPI discovery URL
    location ^~ /hosting/discovery {
        proxy_pass http://127.0.0.1:9980;
        proxy_set_header Host $http_host;
    }

    # Capabilities
    location ^~ /hosting/capabilities {
        proxy_pass http://localhost:9980;
        proxy_set_header Host $http_host;
    }

    # Main websocket
    location ~ ^/lool/(.*)/ws$ {
        proxy_pass http://127.0.0.1:9980;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $http_host;
        proxy_read_timeout 36000s;
    }

    # Download, presentation and image upload
    location ~ ^/lool {
        proxy_pass http://127.0.0.1:9980;
        proxy_set_header Host $http_host;
    }

    # Admin Console websocket
    location ^~ /lool/adminws {
        proxy_pass http://127.0.0.1:9980;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $http_host;
        proxy_read_timeout 36000s;
    }
}
/etc/nginx/sites-available/phpmyadmin:
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/phpmyadmin.[DomainName].org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/phpmyadmin.[DomainName].org/privkey.pem;
    server_name phpmyadmin.[DomainName].org;

    root /var/www/phpmyadmin/;
    index index.php index.html index.htm index.nginx-debian.html;

    location / {
        index index.php;
    }

    ## Images and static content is treated different
    location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|xml)$ {
        #access_log off;
        expires max; #30d;
        log_not_found off;
    }

    #location ~ /\.ht {
    #    deny all;
    #}

    #location ~ /(libraries|setup/frames|setup/libs) {
    #    deny all;
    #    return 404;
    #}

    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php/php7.3-fpm.sock; #127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; #/var/www/phpmyadmin$fastcgi_script_name;
    }
}
/etc/nginx/sites-available/default:
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# https://www.nginx.com/resources/wiki/start/
# https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
# https://wiki.debian.org/Nginx/DirectoryStructure
#
# In most cases, administrators will remove this file from sites-enabled/ and
# leave it as reference inside of sites-available where it will continue to be
# updated by the nginx packaging team.
#
# This file will automatically load configuration files provided by other
# applications, such as Drupal or Wordpress. These applications will be made
# available underneath a path with that package name, such as /drupal8.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##

# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    # pass PHP scripts to FastCGI server
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php-fpm (or other unix sockets):
    #    fastcgi_pass unix:/run/php/php7.3-fpm.sock;
    #    # With php-cgi (or other tcp sockets):
    #    fastcgi_pass 127.0.0.1:9000;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

# Virtual Host configuration for example.com
#
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#
#server {
#    listen 80;
#    listen [::]:80;
#
#    server_name example.com;
#
#    root /var/www/example.com;
#    index index.html;
#
#    location / {
#        try_files $uri $uri/ =404;
#    }
#}
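(Note: I've replaced my real domain and server IP with [DomainName] and [IpAddress] placeholders throughout the files above. If anyone wants to adapt these, a trivial substitution sketch like the one below shows what gets filled in; the example domain and address are made up.)

```python
# Illustrative only: fill in the [DomainName] / [IpAddress] placeholders
# used throughout the config files above. The values here are examples.
def fill_placeholders(config_text, domain, ip_address):
    return (config_text
            .replace("[DomainName]", domain)
            .replace("[IpAddress]", ip_address))

snippet = "server_name onlyoffice.[DomainName].org;\nserver [IpAddress]:81;"
print(fill_placeholders(snippet, "example", "192.168.1.10"))
```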
Thank you so much for joining the Nerd Club! I appreciate it a lot. I'm glad my videos were helpful to you.
Looking through your NGINX files, I don't see anything jumping out at me as being "wrong" or "not optimal" (keeping in mind that I look at Apache configuration more often than NGINX configuration.) The most important thing for a configuration file to do is to set the configuration how you need it, so if this configuration's working, it sounds like you've done everything right.
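If you ever want an automated second pass over files like these, a small script along the following lines can flag directives that newer nginx releases warn about, such as `ssl on;` (superseded by the `ssl` parameter on `listen`) and `ssl_protocols` lines that still allow TLSv1 or TLSv1.1. The rule list here is illustrative and far from exhaustive:

```python
import re

# Patterns for directives that newer nginx releases deprecate or warn about.
# (Illustrative list only, not a complete linter.)
CHECKS = [
    (re.compile(r"^\s*ssl\s+on\s*;"),
     "use 'listen ... ssl' instead of 'ssl on;'"),
    (re.compile(r"^\s*ssl_protocols\b.*\bTLSv1(\.1)?(?!\.\d)"),
     "TLSv1/TLSv1.1 are deprecated"),
]

def lint(config_text):
    """Return (line_number, advice) pairs for flagged lines."""
    findings = []
    for lineno, line in enumerate(config_text.splitlines(), 1):
        for pattern, advice in CHECKS:
            if pattern.search(line):
                findings.append((lineno, advice))
    return findings

sample = """server {
    listen 443 ssl;
    ssl on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
}"""
for lineno, advice in lint(sample):
    print(lineno, advice)
```

On a real server you would read each file under /etc/nginx/sites-available instead of the inline sample, but `sudo nginx -t` remains the authoritative syntax check.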
I haven't created a TURN server before, and I don't have much experience with Nextcloud Talk yet; I've primarily used RocketChat combined with Jitsi for chatting over the last couple of years. However, NOTS does have an internal Nextcloud server again as of a few weeks ago, and as great as Jitsi is, it does have quality issues from time to time, so I'll probably be checking Nextcloud Talk out sooner or later.