Automate LetsEncrypt SSL Certificate Renewals for NginX

For those in a rush: this blog post shows you how to use free SSL certificates and have them renew perpetually (in theory) so they are near zero hassle to use.

It is always nice to automate things. This saves you a lot of time and lets you concentrate on the things that actually matter and make a difference.

Renewing SSL certificates is one of those very important things that you don’t want to screw up; a mistake could cost you dearly in terms of downtime or even money. Traditionally, SSL certificates have been quite expensive, so people have used HTTPS on their websites sparingly as a result.

Enter LetsEncrypt.

The Electronic Frontier Foundation (EFF), together with its partners in the Internet Security Research Group (ISRG), came up with a great solution to this recurring problem. Let’s Encrypt became a certificate authority, and the EFF’s Certbot client provides the tools to easily automate the process of generation, signing, delivery and verification.

This project helps you encrypt communication between clients and servers via HTTPS. It is a necessary step toward online privacy, though not a sufficient one on its own.

Best of all, the LetsEncrypt certificates are free, which is very neat. The downside is that they are only valid for a maximum of three months and then need renewing. At CloudSigma we use these certificates, but we have automated the renewal process so they are just as convenient to work with as commercial, paid-for certificates.

Here’s how to automate your deployment of these awesome certificates and forget about the renewal overhead safely… at least for a while.

Installation of Certbot

First of all, you need to install the required software. It is easy enough on any major distro:

# CentOS7 / RHEL7
yum install certbot

# Fedora 23 or later
dnf install certbot

# Debian 8
apt-get install certbot -t jessie-backports

# Ubuntu 14.04/16.04
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot
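
A quick sanity check that the client is installed and on your PATH:

certbot --version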

Configuration of NginX

Now, since NginX is not fully supported by certbot at the time of writing, we need to configure certbot to do everything we want it to do without suffering issues.

First, we will set up /etc/letsencrypt/cli.ini, which will contain all the default settings we want.

# create configuration dir
mkdir -m 775 /etc/letsencrypt

# create configuration file
cat > /etc/letsencrypt/cli.ini << EOF

# Settings
## General
no-redirect
rsa-key-size = 4096
text = True

## Plugin
authenticator = standalone
installer = null
standalone-preferred-challenges = tls-sni-01

## automation
agree-tos = True
renew-by-default = True

# Info
email = me@example.tld

EOF

Please remember to update “email” to the address where you want to receive notices about these certificates.

Now, every time we run certbot, it will be configured to use:

  • standalone as the authenticator method
  • manual installation method
  • renew by default

So, the only thing left to provide is the domains. These were intentionally left out of the configuration (yes, you could add them there if you like) for a good reason.

First of all, we’re using the standalone authentication method. This means that our NginX instance must be stopped for the challenge to work. Don’t worry, it’s just a little bit of downtime depending on how many certificates you want to get.

So, next, we write the script that will renew/install the certificates for us and put it in /usr/local/sbin/letsencrypt-renew, as per the Filesystem Hierarchy Standard 3.0.

#!/usr/bin/env bash

# stop NginX
systemctl stop nginx

# renew certificates
certbot -d example.tld -d www.example.tld -d downloads.example.tld -d mail.example.tld
certbot -d someother.tld -d www.someother.tld -d webmail.someother.tld

# start NginX
systemctl start nginx

Now, as you can see, you need to specify each and every sub-domain you will be using for a domain. The limit right now is 100 names per certificate.

Keep in mind that:

  • The first -d will determine the name of the certificate file. So, in my case, I will end up with example.tld.pem and someother.tld.pem.
  • You can include up to 100 names in a single certificate, but you can always generate separate certificates instead. This means you can stick to one -d per call and you will get as many separate cert files as you need.
  • It is much faster to bundle many names into one certificate than to manage many separate cert files.
  • You can use these certificates for email.
  • Beware of doing this too often. You will get “rate-limited” if you do ;D.
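
Before wiring this into cron, make the script executable and give it one manual run. If you want to test without burning through the production rate limits, certbot accepts a --staging flag you can temporarily append to the calls in the script (staging certificates are not browser-trusted, but their issuance limits are far more generous):

# make the script executable
chmod +x /usr/local/sbin/letsencrypt-renew

# run it once by hand and watch for errors
/usr/local/sbin/letsencrypt-renew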

Now, the cron job!

# invoke crontab as root
crontab -e

# do this on a weekly basis. You've got 90 days, so you could try @monthly as well.
@weekly /usr/local/sbin/letsencrypt-renew
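
To confirm the entry took, list the crontab:

# list root's crontab
crontab -l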

Done.

Summary

Here’s what we did:

  • We installed certbot.
  • We configured it with some sane defaults.
  • We created a script that stops NginX, renews our certificates and starts NginX again (please note this only works on systemd-enabled systems)
  • We created a cron job to run that script on a weekly basis (yes, the certificates are valid for three months, but renewing early leaves a comfortable buffer in case the process fails before the current certificate expires)

References

Source: CloudSigma.com

Published in Automation, Blog Posts, certbot, certificate, https, Nginx, renewal, SSL, Tutorials

Millennials are blazing a trail to hybrid cloud

Innovate. Automate. Make it smart. Get to market fast. It’s an exciting time for enterprise, but it’s also one of those “be careful what you wish for” moments. Ease of access to software as a service (SaaS), machine intelligence, and Internet of Things (IoT) encourages risk taking as much as innovation. Data and network security runs up against the Wild West of shadow IT. CISOs, IT admins, and even cloud providers face challenges in balancing compliance policies with the need for the speed, agility, and creative freedom that both large and small businesses have come to expect.

Two things are fueling the DIY disruption underpinning shadow IT: cloud computing and the working generation that understands it best, millennials. According to the Brookings Institution, millennial workers will make up 75 percent of the U.S. workforce by 2025. As the fastest growing demographic, millennials are influencing business models and expectations across the board. Four-fifths of today’s employees admit to using SaaS apps that may or may not be part of their companies’ IT framework. While older workers are more likely to stick with company-managed tools and processes, they’re adopting some of the do-it-yourself approach of their younger colleagues to remain agile and competitive.

Millennials lean heavily toward public cloud adoption and innovative technologies, and they expect their employers to provide them with the flexibility and creative liberties to do their work. Millennials believe innovation is critical enough that they will take risks to achieve it, a great deal more risk than their Generation X and baby boomer counterparts will.

In the U.S., nearly nine in ten millennial workers believe it’s important to work for an organization that allows them to use open-source technologies. They’re more likely to look for a new job if their company’s IT is not as forward-leaning and creative as these workers see themselves.

Millennials are not inhibited about shadow IT; in fact, quite the opposite. Their “make it so” attitude toward the public cloud is fueling innovation and their productivity. Modern SaaS and business services are fast, easily onboarded, and increasingly sophisticated. More than ever, millennials are comfortable hosting their company’s essential applications and services in the public cloud, and they expect to be able to do so.

None of this changes the fact that sustainable enterprise security requires deep visibility and control of assets. Once an organization’s data or user identities are in the wild of the public cloud, IT doesn’t have the visibility it needs to manage access and detect threats effectively.

While the traditional and understandable InfoSec response is to block access, this is no longer viable even in the short term. Blocking cloud and open-source solutions not only impacts innovation and productivity, it can disenfranchise a significant segment of the workforce.

So, how does enterprise IT solve the shadow IT conundrum?

In the short term, you can keep your network secure while supporting the spirit of innovation that millennial workers need. By implementing a solution that integrates with the cloud applications your people are using now, you can mitigate the insecurities of shadow IT while your organization develops its hybrid future. Microsoft Cloud App Security, for example, gives IT visibility into public cloud services like Box, as well as BYO and IoT devices.

Implementing a true hybrid cloud is the longer-term way out of the shadow IT wilds. In this effort, enterprise leaders should consider millennials’ superpowers, the value they put on innovation and their willingness to try new things and fail fast, as advantages, and encourage a work culture that empowers employees to choose the cloud apps and services they want, without sacrificing the security and compliance your organization needs.

The transition to a hybrid cloud can present challenges for any organization. So to help, we’ve created a free cloud migration assessment. This tool will help you to see the value of a hybrid cloud and provide you with detailed information, such as cost estimates.


Source: Tech Net Microsoft

Published in Uncategorized

Five reasons to run SQL Server 2016 on Windows Server 2016, part 5

Consistent data environment across hybrid cloud


Have you ever seen a tree that simultaneously bears completely different species of fruit? It’s a real thing: apples, plums, oranges, lemons, and peaches all growing on the same tree. The growers have the advantage of a consistent environment (the same tree) that allows them to be efficient with resources, pick the type of fruit they need when they need it, and always have the right kind of fruit without having to invest in specialized plants.

Those trees are like the consistent foundation shared by SQL Server 2016, Windows Server 2016, and Microsoft Azure: Common code underlying the Microsoft platform makes it possible to run your data workloads seamlessly on premises, in a hybrid environment, or strictly in the cloud, and to pick the option you need, while moving easily from one environment to the other.

Common code = Unique value

The common code base creates a write-once-deploy-anywhere SQL Server and Windows Server experience. You have flexibility across physical on-premises machines, private cloud environments, third-party hosted private cloud environments, public cloud, and hybrid deployments. Figure 1 diagrams this unique platform.

Figure 1: Microsoft Data Platform: On premises, hybrid, and cloud


This means that you can choose a hybrid deployment and take advantage of any of the four basic options for hosting SQL Server:

  1. SQL Server in on-premises non-virtualized physical machines
  2. SQL Server in on-premises virtualized machines
  3. SQL Server on Azure Virtual Machine: This is SQL Server installed and hosted in the cloud on Windows Server virtual machines (VMs) running on Azure. Also known as infrastructure as a service (IaaS), it is optimized for lift and shift of existing SQL Server applications to the cloud. All versions and editions of SQL Server are available, including free ones for dev/test and lightweight workloads.
  4. Azure SQL Database (Microsoft public cloud): This is a SQL Server database native to the cloud and is compatible with most SQL Server features. It is also known as a platform as a service (PaaS) database or a database as a service (DBaaS). It delivers all the agility and world-class security of Azure and is ideal for software-as-a-service (SaaS) app development.

When you run SQL Server on Windows Server, whether on-premises or in an IaaS virtual machine, you get the benefit of:

  • Improved database performance and availability with support for up to 24 terabytes of memory and 640 cores on a single server.
  • Built-in security at the operating system level. For example, database admins can use a single Active Directory management pane across Azure and on-premises machines to set policies and enable or disable access, which truly raises the security bar across the organization.
  • Simple and seamless upgrades with Rolling Upgrades
  • Ability to make SQL Server highly available on any cloud with Storage Spaces Direct to create virtual shared storage across VMs.
  • Access to new classes of direct-attached storage (e.g., NVMe) for applications that require redundant storage across machines.
  • Reduced costs of hosting additional VMs by leveraging a Cloud Witness.

You benefit from the ability to use familiar server products, development tools, and technical expertise across all environments. No other platform delivers across this spectrum of implementations and builds in hybrid capabilities everywhere. Learn how to choose Azure SQL (PaaS) Database or SQL Server on Azure VMs (IaaS).

Free migration tools

Further easing the way to hybrid and cloud solutions are the SQL Azure Migration Wizard and other free migration tools. These are designed to provide easy migration of SQL Server and Windows Server 2016 workloads to virtual machines in the cloud.

When determining how much hardware to allocate for certain applications, downsizing datacenters, or migrating existing workloads to virtual machines (VMs), you can tap into cloud capabilities in several ways:

  • Backup to Azure, including managed backup, backup to Azure Block Blobs, and Azure Storage snapshot backup
  • The Azure Site Recovery tool, which migrates workloads with full replication and backup: on-premises VMs and physical servers to Azure VMs, Azure IaaS VMs between Azure regions, and AWS Windows instances to Azure IaaS VMs. It also allows easy addition of an Azure node to an AlwaysOn Availability Group in a hybrid environment.
  • Two new limited previews, Azure Database Migration Service and Azure SQL Database – Managed Instance, which create a great path for customers looking for a way to easily modernize their existing database environment to a fully managed PaaS service without application redesign.

SQL Server License Mobility and Azure Hybrid Use Benefit for Windows Server

Even licensing is designed to ensure that wherever you deploy, you can cost-effectively take advantage of all the options.

  • SQL Server customers with active Software Assurance can use existing licenses on Azure Virtual Machines with no extra charges to SQL Server licensing. Simply assign core licenses equal to the virtual cores in the VM, and pay only for VM compute costs.
  • License Mobility ensures you can easily move SQL Server databases to the cloud using your existing licensing agreement with active Software Assurance. No additional licensing is required for SQL Server passive high availability (HA) nodes; you can configure a passive VM with up to the same compute as your active node to deliver uptime.
  • Windows Server customers with Software Assurance can save up to 40 percent by leveraging on-premises licenses to move workloads to Azure VMs with this Azure Hybrid Use Benefit.

SQL Server 2016 with Windows Server 2016: Built for hybrid cloud

Microsoft continues to build in innovation so that organizations do not have to purchase expensive add-ons to get the benefits of the cloud with security, simplicity, and consistency across on-premises and the cloud. Together, SQL Server 2016 and Windows Server 2016 will bear fruit for your organization. Get started on hybrid now.

Learn more about SQL Server in Azure VM in this datasheet.

Try SQL Server in Azure.

See why Windows Server 2016 is the best choice for any platform on-premises or cloud.

Improve security, performance, and flexibility with SQL Server 2016 and Windows Server 2016

By running SQL Server 2016 and Windows Server 2016 together, you can unlock the full potential of the Microsoft data platform. This series of blogs on five reasons to run these two new releases together barely scratches the surface. What’s the best way to find out just how powerful this combination is? Try it out! Download your free trial of Windows Server 2016 and SQL Server 2016 today.



Source: Tech Net Microsoft

Published in Uncategorized

Windows Server for Developers: News from Microsoft Build 2017

This blog post was authored by Erin Chapple, General Manager, Windows Server.

On behalf of the Windows Server team, I want to send a warm welcome to the thousands of developers who are joining us this week for the Microsoft Build Conference. It’s never been a more interesting time to be a developer, with new application models, patterns and frameworks changing how we work and build great applications. Organizations around the world rely on Windows Server to be a great operating system on which to run their applications. As application patterns change, our team is right there with you to make sure you have the OS innovation you need.

Windows Server is joining the Windows Insider program

The last year was an incredible partnership with you as we finalized and launched Windows Server 2016. We progressed through five technical previews of Windows Server, shaping and refining the experience and functionality together. Simply put, this release would not be the same without the detailed feedback we received; your partnership was critical to its success.

One common theme we heard again and again came from our customers: you want access to Windows Server builds more frequently to test new features and fixes. Today I am pleased to announce that Windows Server is joining the Windows Insider program! Starting this summer, regular and frequent builds of Windows Server (including container images) will be available to all Windows Insiders who want to download and test them. As Dona Sarkar, who runs the Windows Insider program, recently said of the Insider community, “These are the biggest fans of Windows I’ve ever seen in my whole life.” With the addition of Windows Server, the community has even more reasons to be a fan.

Container-optimized Nano Server

The Nano Server container image has picked up speed quickly as the foundation for developers modernizing their existing applications as well as those building new applications. Customer adoption of containers has exceeded our expectations, and we are listening to the community to focus on the continued investments they would like in this area. With this upcoming feature release, we will build on the initial promise of Nano Server and focus on providing the very best containers foundation for developers.

We have been partnering closely with the .NET team to bring all the amazing .NET Core 2.0 work to containers with an optimized container image based on Nano Server. This work will help reduce the footprint of the .NET container image by at least 50 percent. For you this means reduced startup time as well as density improvements.

Windows Subsystem for Linux (WSL) on Windows Server

Last month in the DockerCon Keynote we demonstrated a Linux container running natively on Windows Server, and we are continuing to make great progress with the Docker and Linux communities. One of the important aspects of this work is ensuring that customers have a great experience managing and building Linux containers. I am pleased to share that we are also bringing the Windows Subsystem for Linux (WSL), commonly known as Bash on Windows, to Windows Server. This unique combination allows developers and application administrators to use the same scripts, tools, procedures and container images they have been using for Linux containers on their Windows Server container host. These containers use our Hyper-V isolation technology combined with your choice of Linux kernel to host the workload while the management scripts and tools on the host use WSL.

Container Orchestration

Container orchestration is another area you asked us to improve. The first step in this is already available! Networking support for Docker swarm mode was made available last month, enabling efficient and simple networking across Windows and even mixed Windows and Linux OS clusters. We are continuing to work on two additional features requested by the Kubernetes community to improve Windows support on Kubernetes-based clusters. The first is the ability to add a network interface to an already running container and the second is the first step for sharing a network interface between two containers to support pods. We have also been working with several community members to understand and build support for mapping named pipes from a container host into a container. This enables specifically configured containers to communicate efficiently with the host and is how many orchestrators are deployed in Linux environments.
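
For a sense of what that swarm-mode networking enables, the commands below sketch an overlay network with a service attached to it. This is an illustration under assumptions: the network name, service name, and image are placeholders, and a swarm must already be initialized with docker swarm init.

# create a swarm-scoped overlay network
docker network create --driver overlay my-app-net

# run a service attached to that network (image name is illustrative)
docker service create --name web --replicas 2 --network my-app-net microsoft/iis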

Container Storage

One of the most common container images we see used is for SQL Server, and with that have come questions around storage; specifically, where should I store my database within a container? While we have volume mounting support (the ability to connect storage from the host into the container), it was limited to locally mounted volumes. We are now adding the ability to map SMB file-based storage directly into a container. This will provide a valuable way to utilize the great file server enhancements in Windows Server 2016 along with containers.
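
As a hedged illustration of the host-local volume mounting that exists today (the SMB mapping described above is the upcoming addition), here is how a SQL Server container might be started with its database files kept on the container host. The image name and environment variables follow the publicly documented microsoft/mssql-server-windows-developer image, but treat them as an example rather than a prescription:

# map C:\sql\data on the host to C:\data inside the container
docker run -d -p 1433:1433 -e ACCEPT_EULA=Y -e sa_password=Str0ngPassw0rd! -v C:\sql\data:C:\data microsoft/mssql-server-windows-developer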

Watch our Build session and sign up for the Insiders program

We can’t do this alone! Your feedback and passion drive us to build better software. If you are coming to Build, be sure to catch the session B8013, “Developing on Windows Server: Innovation for today and tomorrow – containers, Docker, .NET Core, Service Fabric, and more”, by Taylor Brown on Friday at 9 am. Or you can watch it streaming live.

Starting this summer, we will begin to post early builds of the new Windows Server features, including container-optimized Nano Server images to the Docker Hub, support for Linux containers, Windows Subsystem for Linux (WSL), better orchestration support and SMB storage for containers. Sign up now, get familiar with the site, and watch the Windows Server blog and the Windows Insider forums for the notice when the preview is available.

Aligned with the next release of Windows 10, these new features will be delivered as part of our first feature release this Fall. It will be available to customers with Software Assurance who commit to a more frequent release model. For customers who prefer the long-term servicing branch (LTSB) these features will be part of the next major release of Windows Server as well.

It’s never been a more exciting time to be a developer, and I personally look forward to hearing your feedback. Check out our new Windows Server site on the Microsoft Tech Community, where you can give us feedback and share ideas with the rest of the Windows Server community.


Source: Tech Net Microsoft

Published in Uncategorized

Customer Success Story with the European Space Agency

Customer Profile

ESA RSS (Research and Service Support) is a service provided by the European Space Agency (ESA). ESA is Europe’s gateway to space. Its mission is to shape the development of Europe’s space capability and ensure that investment in space continues to deliver benefits to the citizens of Europe and the world.

The ESA RSS service provides resources to support Earth Observation data exploitation. It supports data access and data processing by bringing user algorithms close to the data and by providing various platforms and environments. RSS also supports the development of new applications and services to generate value-added information derived from Earth Observation data.

By choosing CloudSigma, ESA RSS found a stable platform that enables them to supply consumers of Earth Observation data with great performance and availability. Additionally, the RSS team has built a strong working relationship with CloudSigma staff, who are on hand to assist them immediately. The ESA RSS team continues to grow their services hosted with CloudSigma over time.

“I find the live chat very useful, especially because the responsiveness is very high. We immediately get a response and the feeling for the user is very positive. You feel that there is always someone ready to intervene.” - Giovanni Sabatino, Earth Observation Research Support Lead Engineer @ ESA

The Challenge

At an international institution such as ESA, the main purpose of the RSS service is to provide their customers with reliable and performant services. Most of their end users are researchers from universities and research centres, service providers, public institutions and also small and medium-sized enterprises.

“The main challenge at the beginning was to integrate the service. We needed to have something like a web portal interface with an engine, which is able to communicate with the cloud provider infrastructure via an API. During the proof of concept we defined a test plan, ran several tests and each of these tests had to pass.”

In order to manage the variable workload and the diversity of processing required, they decided to trust CloudSigma to satisfy different user requirements. This proved to be a very successful strategic decision that helped them focus on their main responsibilities in terms of development and support and not get distracted by basic operational requirements.

“According to me, one of the main benefits is the fact that we get rid of the system administrator part and can focus on the application, on the algorithm development and on the user support.”

The Solution

Currently, RSS is using computing power at CloudSigma and satellite data stored on the clustered storage system to provide virtual environments where Earth Observation users can develop their algorithms or deploy their processes. In doing so, the aim is to help researchers and service providers speed up their research, saving time and costs.

“We need large amounts of high performance storage, because our data archive is very large and growing. By having our choice of good performance, cost effective storage or high-performance SSD storage, all of which is elastic and on-demand, we’re able to quickly and cost effectively deploy our platform, whose goal is to use earth observation data to protect both life and property from earthquake and volcanic hazards.” - Giovanni Sabatino, Earth Observation Research Support Lead Engineer @ ESA

The scale-out magnetic storage provides ESA’s RSS team flexible and cost-effective storage with the necessary performance and scalability to run its analyses and store the satellite data and computational results.

More specifically, the RSS team hosts the Cloud Toolbox service (http://eogrid.esrin.esa.int/cloudtoolbox), which provides users with tailored machines hosted on CloudSigma for development, analysis and processing activities. The Cloud Toolbox virtual machine comes with pre-installed software, and additional software can be installed based on user requests.

“The main advantage for the users is flexibility because the RAM, CPU cores and disk size can be tuned to users’ needs. Another advantage is of course the fact that due to the cloud nature of the service, the Cloud Toolbox is available and accessible from everywhere in the world.”

Before choosing CloudSigma, the ESA RSS team evaluated a number of cloud providers. They performed tests for application compatibility, performance, customer service, API functionality and management tools. CloudSigma proved to be one of the most reliable and flexible cloud providers.

“I would say that the web interface, the possibility to create templates and deploy many similar machines plus the responsiveness of the support were the main strengths observed when we evaluated the service.”

The Impact

Once the ESA RSS team started using CloudSigma, they were able to improve the quality of the services they offer. Together with CloudSigma, the RSS team is growing and steadily increasing their cloud deployment. The RSS team doesn’t hesitate to recommend CloudSigma to people who are looking for a reliable cloud partner.

“With CloudSigma I have the impression that the VM I am working on is installed on my PC. For us CloudSigma is a success story.” - Giovanni Sabatino, Earth Observation Research Support Lead Engineer @ ESA

“We would recommend CloudSigma as one of the best options because of the infrastructure usability, good user support and of course high availability of their VMs.”

Based on the overall experience that the ESA RSS Team has with CloudSigma, they are very satisfied with the services and collaboration and consider CloudSigma one of the best cloud providers on the market and a perfect match for their requirements.

Source: CloudSigma.com

Published in Blog Posts, cloud performance, Cloud Services, cloud storage, Customer Success Story, earth observation, esa, satellite, space applications

Skill up with Know it. Prove it. challenges for IT Pros!

This blog post was authored by Matthew Calder, Senior Content Developer, Learning and Readiness Team.


Looking to augment your knowledge and expand your abilities? Check out the two IT Pro tracks in the latest Microsoft Virtual Academy (MVA) Know it. Prove it. challenge, underway now and wrapping up on May 17! Pick up valuable skills quickly through video tutorials, demos, assessments, and more. Learn something new, get the career edge you need, and earn badges to share with the world! Plus, when you take on a challenge, you become part of the Agency, a fictitious organization of technological masterminds who use their skills for good.

  • Azure for IT Pros: Gain confidence in your Azure skills! In this challenge, get up to speed quickly and efficiently on the latest infrastructure features of Azure, including networking, storage, and virtualization. Expert Corey Hynes takes you through this comprehensive and authoritative series.
  • IT Pro Fundamentals: Looking to join the exciting world of information technology professionals? Businesses depend on IT Pros now more than ever, and people with IT administration and management skills are indispensable. In this challenge, get a grasp of the basics of managing and securing networks.

Interested? To get started, register for MVA if you haven’t already, add a challenge to your MVA dashboard, and dive in!

Whether you’re just beginning a career as an IT Pro or need to add knowledge of the latest Azure functionality to your skill set, one of the Know it. Prove it. challenges can help. The courses remain on your dashboard until May 17. To earn a badge, be sure to complete your challenge before then. Think you might need more time to learn? Create your own playlist, and add it to your MVA dashboard.

You’ve got the smarts. Now show the world.

Accept a challenge!


Source: Tech Net Microsoft

Published in Azure, Uncategorized

How To Build Your Own Private Dropbox Style Server

Have you heard of Dropbox? Silly question, right? Everyone knows what Dropbox is: the little icon, the magic folder, the synchronization of data between all of your devices. Dropbox is the poster child for cloud services, but as awesome as their service is, they bring up another issue.

How secure is your data?

Dropbox has had more than one security breach in the past, exposing passwords and allowing unauthenticated users access to data. In the case of the password breach, Dropbox didn’t know about it (or, if they did, they didn’t report it) for more than four years. A total of 68 million passwords were leaked onto the Internet.

I’m not picking on Dropbox here. Internet security is hard stuff, and software developers are human beings who make mistakes. Other providers of cloud services have also seen their fair share of issues, and in the rush to move everything out to the cloud, end users haven’t been given good options.

The security of your data isn’t just about keeping it out of the hands of nefarious evildoers. It’s about keeping your affairs private. On March 29, 2017, the lawmakers of the United States voted overwhelmingly to undo privacy legislation passed by the previous administration. This legislation was designed to prevent internet service providers such as Comcast, AT&T, and Verizon from monitoring the Internet activity of their subscribers and selling that information to advertisers. The affected providers lobbied against the legislation, saying that it unfairly excluded companies like Google and Facebook.

The difference that the ISPs overlooked is that people have a choice of whether to use Facebook or Google, and by using their free services, they consent to having their information collected and sold. However, a person must go through an ISP to reach the Internet, and this gives the ISP unprecedented access to information about the person. Customers pay for service from the ISP, and the ISP is still collecting and selling personal information about those customers for use in advertising and data mining of offline activities. What you search for on Google may now be visible to your bank and used to determine your eligibility for credit or loans.

Do you think you’re safe because you don’t live in the US? You’re not – a huge portion of data that traverses the Internet goes through cables that pass through the United States. If you’re in Latin America and visit a site in Europe, your data most likely traversed the United States. If you think that businesses won’t collect that data because it’s against the law, you’re wrong. The value of the data they sell exceeds any penalty they might incur for doing so. This is the world in which we live, and this is what we must fight against.

Turning back to Dropbox, they use HTTPS and assert that data is encrypted “at rest,” which means that when your data is on their servers, it’s encrypted. If someone came in and stole a server, they wouldn’t be able to access the information stored on its drives. The problem with this is that Dropbox knows the key. No one goes in and steals servers for the data. All compromises are digital or from within, and in both cases, the key is visible in RAM or in a configuration file or database somewhere.

What if Dropbox indexes your files and sells that information to advertisers?

They say that they don’t, and maybe they’re telling the truth. Whether or not they do it is not the question. The question is whether or not they can, because anything that a business can do to make money is on the table. Everyone else is doing it, and all it takes is a change in their terms and conditions or privacy policy, presented to you when you next log in, and if you don’t accept it, you can stop using the service.

This is the critical piece of information to consider when using cloud services: they own your data. In the case of Dropbox, at least you have a copy of it locally, but there’s nothing to stop their app from deleting it or locking you out of it if you choose to terminate the relationship. No company has any requirement to return your data to you, because once you’ve entrusted it to them, it belongs to them. The laws in the US specify that information on a company’s servers, network, or property belongs to them. Antiquated privacy laws state that once you entrust your data to a third party you no longer have a reasonable expectation of privacy. This means that a company who chooses to hand over data without a search warrant is not in violation of the law. Your data isn’t yours.

It’s not all doom and gloom, though. If you’re reading this, you’re either a technical person or you know someone who is. Every service that you use from every provider has an alternate self-hosted version. They may not have the same polish and universal integration (because the Beast seeks to feed itself), but they’re usable and give you control over your data, where it lives, who can access it, and how it’s used.

I encourage you to set up your own alternate services and make these available for your friends and family who are non-technical. People are concerned about online privacy but don’t know what to do about it. By giving them a choice, we return power to the place where it belongs: in the hands of the people.

Seafile: Your Own Private Dropbox

The rest of this post is going to show you how to install Seafile on an instance out at CloudSigma, protected via SSL offloading delivered by HAProxy.

Seafile is a file management service that functions like Dropbox and has features that Dropbox doesn’t:

  • Create multiple libraries, each with their own rules for sharing
  • Create groups or organizational libraries
  • Connect the client to multiple seafile instances
  • Encrypt data from end to end
  • Store as much data as you want
  • Create a wiki for groups to collect knowledge

Seafile is free and open source, suitable for individuals and businesses, and has a paid, commercial offering that includes additional features such as integration with external authentication providers and the ability to use shared storage and run with high availability. They have apps for desktop and mobile, and if you don’t want to host your own instance of Seafile, they offer a cloud service too.

Running Seafile Under Docker

Seafile is one of those applications that doesn’t fit well into Docker. The entire application needs to be persistent, so the container exists solely to run the application and keep it in an isolated process space. Once it’s running, it runs great, but we’ll need to jump through some hoops to get it started.

We’re going to use this Docker repository for our Seafile install.

Bring Up A New Instance

Bring up a new instance in CloudSigma. For this post I’ll be using Ubuntu 16.04 LTS (Xenial), which you can choose from the drive library. I’ve created a cloud-init file that will update the system, install our dependencies, install Docker and HAProxy, pull the container image, and reboot the system to activate the changes. You can retrieve it from my Gitlab environment here. Paste that in as your custom cloud-init config, attach your SSH key, and start the instance. If you want to follow the progress, SSH into it and run tail -f /var/log/cloud-init-output.log. The system will announce when it’s ready to reboot, and after it reboots, you can log back into it.
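
The linked cloud-init file is the authoritative version, but its shape is roughly the sketch below. Treat this as an assumption-laden outline (the package names and the Docker install method are my guesses, not a copy of the real config):

#cloud-config
package_update: true
package_upgrade: true
packages:
  - haproxy
  - hatop
  - docker-compose
runcmd:
  # install Docker from the upstream convenience script
  - curl -fsSL https://get.docker.com | sh
  # pre-pull the Seafile image used later in this post
  - docker pull jenserat/seafile
power_state:
  mode: reboot
  message: Rebooting to activate changes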

Perform Initial Seafile Install

Because Seafile doesn’t fit well into the Docker model, the first run requires manual installation. The steps that are covered in the Docker repo don’t take advantage of a Seafile change from version 4 that removed the two high ports and shifted synchronization to HTTP under a dedicated path. They also don’t consider how to install a reverse proxy such as nginx or HAProxy. We’ll follow a similar route, but with slight differences.

  1. Run the container with an active shell, mounting our persistent volume from /opt/docker/seafile/seafile-opt on the host as /opt/seafile in the container.
    $ sudo docker run -it --rm -v /opt/docker/seafile/seafile-opt:/opt/seafile \
    jenserat/seafile -- /bin/bash
    *** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
    *** Running /etc/rc.local...
    *** Booting runit daemon...
    *** Runit started as PID 7
    *** Running /bin/bash...
    root@b00f5cce813b:/#
  2. Run download-seafile to install and set up Seafile.
    • This will download the latest version of Seafile (currently v6)
  3. Go through the Seafile installation
    root@b00f5cce813b:/# /opt/seafile/seafile-server-6.*/setup-seafile.sh
    -----------------------------------------------------------------
    This script will guide you to config and setup your seafile server.
    
    Make sure you have read seafile server manual at
    
    	https://github.com/haiwen/seafile/wiki
    
    Note: This script will guide your to setup seafile server using sqlite3,
    which may have problems if your disk is on a NFS/CIFS/USB.
    In these cases, we sugguest you setup seafile server using MySQL.
    
    Press [ENTER] to continue
    -----------------------------------------------------------------
  4. Answer the questions that it asks regarding your installation. Select a server name and hostname that you can configure in DNS. Use the defaults for the last two questions.
    What would you like to use as the name of this seafile server?
    Your seafile users will be able to see the name in their seafile client.
    You can use a-z, A-Z, 0-9, _ and -, and the length should be 3 ~ 15
    [server name]: seafile
    
    What is the ip or domain of this server?
    For example, www.mycompany.com, or, 192.168.1.101
    
    [This server's ip or domain]: seafile.example.com
    
    Where would you like to store your seafile data?
    Note: Please use a volume with enough free space.
    [default: /opt/seafile/seafile-data ]
    
    What tcp port do you want to use for seafile fileserver?
    8082 is the recommended port.
    [default: 8082 ]
    
    
    This is your config information:
    
    server name:        seafile
    server ip/domain:   seafile.example.com
    seafile data dir:   /opt/seafile/seafile-data
    fileserver port:    8082
    
    If you are OK with the configuration, press [ENTER] to continue.
  5. Press Enter after the next block of text for the installer to set up the web portion. This doesn’t ask you any questions.
  6. The installer will exit. The information that it shows you assumes that you’re running Seafile directly on a server, not within Docker. As long as it outputs the text below, you can type exit to exit the shell.
    -----------------------------------------------------------------
    Your seafile server configuration has been completed successfully.
    -----------------------------------------------------------------

Configure HAProxy

We’ll use HAProxy to handle SSL termination for the Seafile clients. It will communicate with Seafile on localhost and proxy the two routes that Seafile uses to operate: one for the web interface and the other for file synchronization.

HAProxy was installed when we brought up the instance, and it copied this configuration file into place. The configuration file is using the domain seafile.example.com, so if you’re setting up your own instance on a real hostname, search /etc/haproxy/haproxy.cfg for this hostname and change it to yours.
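
For orientation, a minimal termination config might look like the sketch below. It assumes Seafile’s web interface (Seahub) listens on 127.0.0.1:8000 and the file server on 127.0.0.1:8082, with synchronization traffic arriving under the /seafhttp path; the file the installer put in place is the one to trust.

frontend https-in
    bind *:443 ssl crt /etc/haproxy/server.pem
    mode http
    # file-synchronization traffic lives under /seafhttp
    acl is_sync path_beg /seafhttp
    use_backend seafile-fileserver if is_sync
    default_backend seafile-web

backend seafile-web
    mode http
    server seahub 127.0.0.1:8000

backend seafile-fileserver
    mode http
    # strip the /seafhttp prefix before handing off
    reqrep ^([^\ :]*)\ /seafhttp/?(.*)     \1\ /\2
    server fileserver 127.0.0.1:8082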

The installation also downloaded a self-signed certificate for seafile.example.com and installed it as /etc/haproxy/server.pem. If you have a real SSL certificate, put it in this location. HAProxy wants a single file with the certificate, any additional chain certificates, and then the key.
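
If you have a real certificate, assembling that single file is plain concatenation; the filenames here are placeholders for your own:

$ cat your_cert.crt intermediate_chain.crt your_key.key | sudo tee /etc/haproxy/server.pem > /dev/null
$ sudo chmod 600 /etc/haproxy/server.pem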

If you change the key or the config, reload HAProxy:

$ sudo systemctl reload haproxy

To see what HAProxy is up to, you can use hatop:

$ sudo hatop -f /var/run/haproxy/haproxy.sock

Press q to exit.

Start Seafile

The installation brought down a Docker Compose file that will start the container and an outbound mailserver (needed to send password reset information). If you’re using a real hostname, edit this file and change seafile.example.com to your actual hostname.
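
For reference, such a Compose file is roughly of the following shape. This is a sketch under assumptions: the localhost-only port bindings match the HAProxy setup above, and the Postfix image and its environment variable are placeholders, so defer to the file that was actually downloaded:

seafile:
  image: jenserat/seafile
  container_name: seafile
  # bind only on localhost; HAProxy terminates SSL and proxies in
  ports:
    - "127.0.0.1:8000:8000"
    - "127.0.0.1:8082:8082"
  volumes:
    - /opt/docker/seafile/seafile-opt:/opt/seafile
  restart: always

postfix:
  # outbound-only mailserver for password resets (image is a placeholder)
  image: catatnight/postfix
  environment:
    - maildomain=seafile.example.com
  restart: always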

Use that Compose file to start the container now:

$ cd /opt/docker/seafile
$ sudo docker-compose up -d

If everything was set up correctly, you’ll be able to see Seafile and Postfix running in the output of sudo docker ps.

$ sudo docker ps --format "table {{.ID}}\t{{.Names}}"
CONTAINER ID        NAMES
c72f3f911579        seafile
a3849a4165f5        seafile_postfix_1

Create the Seafile Admin Account

Seafile doesn’t create an admin account for you, so when you first run it, you’ll have to complete the following steps.

$ sudo docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh
E-mail address: admin@example.com
Password:
Password (again):
Superuser created successfully.

Log Into Seafile

If you’re using seafile.example.com as your URL, you’ll have to add this to /etc/hosts with the IP of your instance:

{your_instance_ip} seafile.example.com

When you’ve done this, or if you’re using a real hostname, you can go to https://seafile.example.com (or your actual host) in a browser, log in with the admin account you created above, and get started with your own secure file storage solution!
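
A quick end-to-end check of the proxy from the instance itself (-k tells curl to accept the self-signed certificate):

$ curl -kI https://seafile.example.com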

Next Steps

To really make use of Seafile, install the desktop and mobile clients. Create users for your friends and family, create libraries, and start synchronizing data. If you’re serious about leaving Dropbox behind, move everything out of Dropbox that you don’t need to have out there. Some applications, like 1Password and YNAB, store their synchronization data in Dropbox because it integrates natively with mobile devices. Make sure that you only use apps that encrypt this data so that it’s not at the mercy of Dropbox or anyone else.

Wrapping Up

Taking back control of your data is a process that never really ends. Businesses exist to make money and are only beholden to their investors or shareholders. Any service that you receive for free is making its money elsewhere, most likely by selling your personal information. Whenever possible, choose ethical service providers and work to keep the Internet doing what it was created to do: give control of information to the people.

Source: CloudSigma.com

Published in Blog Posts, cloud storage, dropbox, Encryption, privacy, private storage, seafile, Tutorials

Customer Success Story with BOC Group

Customer Profile

BOC Group is an international leader in software and consultancy providing products and services for Business Process Management (BPM), Enterprise Architecture Management (EAM) and Governance, Risk and Compliance Management (GRC). BOC Group implements the management strategy of its clients and creates value for their business and IT. Headquartered in Vienna, BOC Group has subsidiaries in Germany, Spain, Ireland, Greece, Poland, Switzerland and France and services a worldwide customer base.

In a nutshell, BOC Group has completed over 35,000 installations and has 20 years of experience, all achieved with zero outsourcing and with 100% performance.

The partnership of BOC Group with CloudSigma enabled the company to offer an innovative software-as-a-service product to its clients and to grow its business with high agility thanks to infrastructure that scales just in time. The flexibility of CloudSigma allowed them to design a customer-orientated IT infrastructure setup corresponding to the exact needs of their application in a cost-efficient way. This impacted the whole scope of BOC Group’s business by increasing customer satisfaction and winning new clients. BOC Group now plans to further grow their business by taking the opportunity to deploy their service in other CloudSigma locations around the world with the same ease.

“Designing and implementing our architecture on the CloudSigma platform serves our exact needs and allows us to grow our cloud service portfolio really fast and efficiently.” - Stepan Seycek, Head of Cloud Services @ BOC Group

The Challenge

The core business of BOC Group is to provide its clients with software tools for globally recognised management approaches, complemented by consulting services based on these tools.

Initially the company offered its products as desktop applications. In order to capture opportunities in the highly dynamic and innovative software market, BOC took the decision to transition to web technologies. The development of the web-based stack architecture allowed BOC to explore a new business opportunity by providing their software as a cost-effective and dynamically scaling SaaS service. Additionally, for customers who wanted to avoid costs and efforts in their own infrastructures, BOC Group offered a hosting service that allowed immediate tool usage by deploying an instance of the software in BOC Group’s in-house infrastructure, accessible to the customers over the internet, in this way extending their business model. At the same time, cloud offerings had emerged and developed, and hence the company decided to create an innovative, highly competitive cloud-based product in order to market their new services and improve service quality and agility. This is how the ADONIS NP Starter Edition was born – a SaaS product based on the ADONIS software tool.

“Initially the hosting services for our software tools were only an add-on offering to our core licence sales business. But with time it has developed and in order to be able to market our innovative service offering we introduced what we called ‘ADONIS:cloud’ at that time.”

To be able to provide ADONIS NP Starter Edition to new clients, BOC addressed the challenge of extending the product to incorporate multi-tenancy, and they also had to solve the challenge of very dynamic infrastructure requirements due to the dynamic usage of the SaaS product line.

“We didn’t want to go with any proprietary solutions that you can get with vendors like Amazon or Azure. They introduced something like private networks, but then you have to configure their proprietary gateway services to connect these networks with each other. That was something that we wanted to avoid in order to stay portable and have platform independency.” - Stepan Seycek, Head of Cloud Services @ BOC Group

“Before the introduction of ‘ADONIS:cloud’ we haven’t faced the challenge of providing multi-tenant systems. On the other hand, we had to think about how to deal with infrastructure, because the in-house provisioning was limited in terms of capacity and availability.”

The company took the strategic decision to externalize the infrastructure behind the ADONIS NP Starter Edition and to move existing hosting customers to this external infrastructure. After considering the option to do a private cloud with an infrastructure provider, BOC Group quickly decided to go for an Infrastructure-as-a-Service (IaaS) partner since they didn’t want to build up the organisation for maintaining hardware and virtualization in an external data center. They wanted to focus on their core business and value proposition.

The Solution

BOC Group used a decision-making framework from MODAClouds to whittle twenty potential cloud providers down to just three. The criteria included many technical and non-technical considerations, weighted by relative importance to BOC Group. After shortlisting three top providers, the next step for BOC Group was to actually test them. This included both larger and relatively small cloud providers. It was part of BOC Group’s business continuity strategy to work with two different providers – a primary provider and an additional provider for disaster recovery purposes. Based on the best overall scoring, they chose CloudSigma as the primary site for their SaaS platform and a second IaaS provider from Germany for their secondary site for disaster recovery.

There were several critical things that CloudSigma managed to provide for BOC Group: CloudSigma enabled them to build their desired tailor-made architecture, which allowed BOC Group to apply their experience and expertise while preserving performance, cost-efficiency and scalability. The architecture was as close as possible to the same setup running on hardware. With CloudSigma the team managed to design an architecture with multiple private networks sitting behind a custom and portable firewall and to ensure their platform was vendor independent – something that they couldn’t achieve with most of CloudSigma’s competitors.

The architecture allows their applications to scale horizontally while a couple of central services can scale vertically. This corresponds exactly to the way their application works and is therefore very efficient. The centralization of some servers allows for reducing efforts in maintaining the application and is cost efficient by reducing the number of licences that are measured on a per core basis.

A key aspect that BOC Group appreciates is the flexibility and unbundling of resources within the CloudSigma platform:

“I would say that everybody interested in providing SaaS on IaaS should think about the degree of flexibility that they need, especially in allocating compute resources. With CloudSigma it’s totally flexible compared to other vendors like Azure or AWS where you have machines categorized by size and different other criteria and it can really get complicated if you look at their price lists.”

CloudSigma also managed to achieve the critical system performance needed for highly interactive business applications, which generate a lot of transactions on BOC Group’s database and require a really high level of performance in storage and networking. CloudSigma could provide the high number of IOPS that the BOC system needs, combined with low latency and great CPU performance, to serve BOC Group’s customers with optimal system performance.

Besides the outstanding technical performance, BOC Group values the high level of trust and the collaborative relationship they have with CloudSigma’s support and operations teams.

“Support – that is something that is a really big plus of CloudSigma. You can reach out for somebody and you have a support engineer available in under a minute. Testing the support capabilities of IaaS providers will show you quite fast how you will be served in the future.” - Stepan Seycek, Head of Cloud Services @ BOC Group

Since the main business and the customers that BOC Group serves are based in Europe – mainly Germany, Switzerland, Austria and France – it was also very important for the company to go with a provider with good European locations. Especially for BOC Group’s hosted services, it turned out that Switzerland was a good overall location choice, since the majority of the company’s customers were willing to use a SaaS solution hosted there.

“Providing SaaS applications that allow clients to store personally identifiable information requires exact transparency and strong guarantees regarding how data is treated. For this reason we strategically decided to go for a European company – this is another key point for CloudSigma.”

The Impact

The deployment of their application on CloudSigma led to several business improvements for BOC Group. Firstly, they could achieve higher availability of their hosting services. Performance was no longer a bottleneck and their clients immediately noticed the gain in service quality.

Secondly, the solution offered the scalability and agility they needed for their software-as-a-service offering. The process of establishing an additional server, which took one to two days and had to be arranged manually before they moved to CloudSigma, can now be completed in seconds in an automated way.

“One very important point, on which we get feedback from our internal ‘clients’, i.e. our colleagues, is the agility that is possible with this IaaS approach. It helps us to be really quick at providing accounts for our clients, which was not possible before.”

Overall, the transition to the cloud allowed BOC Group to accelerate their business growth to the point that they now even have a dedicated team operating their growing IaaS environment.

“Our SaaS business started growing when we started on IaaS. Before that we had a stable base of customers on our internal hosting. Now we face a growing demand resulting from digitization initiatives and increased acceptance for cloud computing and the scalable model of our SaaS platform and CloudSigma’s IaaS platform lets us convert it to growing business.”

“I really appreciate that CloudSigma tackles issues at the core and works on eliminating them at the core.” - Stepan Seycek, Head of Cloud Services @ BOC Group

The Future

The cooperation of BOC Group with CloudSigma opens new doors for strategically growing their SaaS business. What started as ‘ADONIS:cloud’ became ADONIS NP Starter Edition in 2016. Along with the ADONIS NP Enterprise Edition for the fully customizable BPM solution, BOC Group’s BPM-SaaS offering covers a wide range of customer scenarios. BOC Group are now considering further deployments in other locations offered by CloudSigma. Industrial clients with high security standards can rely on the Swiss location, however, some clients, for example those from certain public administration sectors in Germany, cannot pick a SaaS solution located outside their own country. CloudSigma’s Frankfurt location and others offer BOC Group a relatively painless way to better service new customers such as these without managing new physical infrastructure locations themselves. Another market that BOC Group is approaching is the US, where demand from their clients for a SaaS solution is growing. Although BOC Group currently serves some accounts in the US, they plan to take the opportunity and expand further in this market, benefiting from the global deployment options that CloudSigma offers.

Source: CloudSigma.com

Published in applications, Blog Posts, cloud, cloud-based, Customer Success Story, growth, potential, SaaS, Scalability, software-as-a-service

Three reasons to watch my free webinar on what’s new in Hyper-V

This post was authored by Jeff Woolsey, Principal PM Manager, Windows Server.

In case you missed my recent webinar, Software-define your infrastructure with Hyper-V using Windows Server 2016, don’t worry, you can still watch it for free!

  • Solutions, not features. Covering all of the exciting new features in Windows Server 2016 would take a couple of days. Since I only had one hour, I decided to focus on three areas that I know are concerns for many of our customers. You’ll learn how deploying Windows Server 2016 can help you boost performance, increase reliability, and gain flexibility. You’ll also learn how your on-premises environment can benefit from the Azure technologies we’ve baked into the OS.
  • We’ve got demos! Like the saying goes, “show, don’t tell.” In this webinar you’ll see how to balance VM loads like a boss and upgrade VM clusters with zero downtime. I literally pull the plug on 500 guest VMs so you can see how easy it is to protect your data with synchronous storage replication.
  • It’s useful. You don’t have to take my word for it. We asked attendees how useful this webinar was to them, and 88.9% of the respondents said it was useful or very useful. (By the way, did I mention that it’s free?)

Check out the free webinar. I hope you’ll find the hour a good use of your time; let me know either way using the comments section below, or hit me up on Twitter, @WSV_GUY. Also, let me know what other webinars would help you understand how to software-define your infrastructure with Windows Server 2016 and Hyper-V.


Source: Tech Net Microsoft

Published in Uncategorized

How to Sync Subscriptions to Fall on the Same Day

CloudSigma offers users the possibility to create as many subscriptions as they wish for the exact amount of resources they need. Your subscriptions can be configured in a flexible manner, along with everything else on CloudSigma. The idea is that you can flexibly combine burst usage with subscribed resources to closely match your requirements over time.

Most of our customers end up with several subscriptions for a single resource and with various expiry dates as their consumption grows over time (think of them stacking up on top of each other). Having to maintain multiple subscriptions per resource with different expiry dates can become a bit of a headache for many customers. But here comes the trick: you can sync up your subscriptions per resource to fall on the same day, after which you can consolidate them into a single subscription going forward.

By default, you can create subscriptions for 1 month, 1 year or 3 years. When you extend your subscriptions, however, you have the option to extend them for a custom period of your choice, as long as it’s at least 30 days. This option is quite useful here, as it lets us sync up our subscriptions to fall on the same day.

Below we walk you through exactly how to sync up your subscriptions so you can more easily manage your resource consumption and fund your account.

Step 1: Look at the expiry dates of your active subscriptions and see which subscription will expire last

Go to the WebApp “Usage & Billing” >> “Subscriptions” and look at the table with your subscriptions:

Let’s assume you have several RAM subscriptions that all expire on different dates. You should look at their expiry dates and identify which one will be active the longest.

In the given example, the subscription that expires the latest is RAM Subscription 3, which expires on 17-May-17. We would like to have only one subscription for the RAM resource, renewing every three months.

Step 2: Extend the subscriptions you want to sync to match the last subscription expiry date

To extend a subscription, click on the “Details” button next to it. Then set the custom expiry date. In the given example we do this procedure for RAM Subscription 1 and RAM Subscription 2, and we extend them to fall on 17-May-17. When extending subscriptions, make sure that the Auto-Renew option is switched off. Otherwise, the subscriptions will be automatically extended.

Please note: If your latest subscription expires in less than a month, you need to first extend it by a minimum of one month and then sync the rest according to its new expiry date.

Step 3: Now all of your subscriptions will expire on the same day. The last step is to make a new consolidated subscription from this day on for the cumulative amount of the resource that you need, and to turn the Auto-Renew option on

Since in the previous steps the subscriptions were extended by different time frames, we now need to aggregate them into one new subscription for the cumulative amount of the resource, which will automatically renew for the desired period going forward.

In our example, we assumed that we need a subscription for the RAM resource for three months. In Step 3 we purchase RAM Subscription 4 which sums up all the RAM (10GB + 2GB + 2GB = 14 GB) that we need. We set it to be for 3 months and we turn the Auto-Renew option on:


Et voilà!

The same procedure can be repeated for any other resource subscriptions you want to sync up.

If you need any help with syncing your subscriptions do not hesitate to contact our sales team at sales@cloudsigma.com.

Source: CloudSigma.com

Published in Blog Posts, cloud features, cloud subscriptions, public cloud, sync subscriptions, Tutorials, user guide