Three ways to better secure your hybrid environment

As more organizations adopt a hybrid cloud model for IT, it's no surprise they're encountering new security challenges. With more surface area to cover, more mission-critical assets to protect, and more sophisticated threats to defend against, security issues become increasingly complex.

And with the average cost of a data breach to a single company now $3.8 million and rising, it's easy to understand why security remains top of mind.

To help, we've created a webinar that provides suggestions on addressing these new security challenges in a hybrid cloud environment. The webinar also offers a high-level overview of the changing role IT operations management plays in security.

There are essentially three areas where hybrid cloud management and security can make a real difference for your organization.

1. Bring IT and security operations together

Often IT operations works with one set of tools and procedures while security teams use an entirely different set. This lack of integration between the systems that detect threats and the systems that respond to them is one reason it now takes organizations an average of 146 days to discover a data breach.

Bringing IT and security together can have a profound impact on your efficiency in fighting security threats. For example, IT may be investigating a performance issue that, at its root, really turns out to be a security issue like a brute force attack. With an integrated approach, you can quickly pass off that information and take action.

2. Ensure good security hygiene

Once you have your team in place, it's time to get your data in line. With your data in one place, you can more easily tackle the basics that sometimes get overlooked: identifying systems with missing or outdated security updates or incomplete configurations, and taking a hard look at anomalous network traffic and user behavior.
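As a toy illustration of that first basic, here is a sketch that flags hosts whose last security update is older than a cutoff. The inventory structure and field names are invented for this example:

```python
from datetime import date, timedelta

def stale_hosts(inventory, today, max_age_days=30):
    """Return names of hosts whose last security update is older than max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    return [h["name"] for h in inventory if h["last_patched"] < cutoff]

# Hypothetical inventory records pulled from a patch-management system.
inventory = [
    {"name": "web01", "last_patched": date(2016, 11, 28)},
    {"name": "db01",  "last_patched": date(2016, 9, 2)},
]
print(stale_hosts(inventory, today=date(2016, 12, 1)))  # ['db01']
```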


3. Enable rapid threat detection and response

The advantage of having your teams integrated and your data in one place is that you can become more efficient in how you handle security events. You can improve your threat detection capabilities and reduce the amount of time it takes to investigate and recover from attacks. With the right tools, you can quickly search data, see a map of suspicious traffic, understand a single threat, or get a comprehensive picture of your entire system.

Watch Three Ways to Better Secure Your Hybrid Cloud Environment

We cover these points in greater detail and a whole lot more. Learn about all the steps you can take to get better visibility of your security posture and how Operations Management Suite can help.

Watch the webinar

Source: Tech Net Microsoft

Posted in Events

Managing StorSimple virtual arrays in the new Azure portal

This post was authored by Manu Aery, Senior Product Marketing Manager, ECG

We're happy to announce that management of the StorSimple Virtual Device Series is now available in the new Azure portal. You can use the StorSimple extension in the new portal to create Azure Resource Manager-based StorSimple Device Managers to manage your virtual arrays.

What's new?

  • Enhanced user experience and improved navigation
  • Multiple optimized workflows for efficient task completion
  • Integrated Support and Diagnostics experiences
  • Support for built-in Azure roles and the ability to manage access through custom roles

How to get started

You can create a new StorSimple Device Manager in the Azure portal to manage your virtual arrays by navigating to: + NEW > Storage > StorSimple Virtual Device Series.


You can register one or more virtual arrays to this newly created StorSimple Device Manager by navigating to the specific Manager > Resource menu > Quick start to download and register a new virtual array.


Additionally, by navigating to Browse > Filter on StorSimple Device Managers, you will be able to:

  • View and manage all StorSimple Device Managers created in the new portal.
  • View all StorSimple Device Managers created in the classic portal. However, you will continue to manage these resources through the classic portal until we migrate them to the new portal. More information on the migration is covered later in this article.


Managing your StorSimple virtual arrays in the new Azure portal

The enhanced user experience makes it easy to manage your virtual arrays within the new Azure portal.

  • The resource menu contains all the options to manage, monitor and troubleshoot your virtual arrays.
  • Some of the frequently performed operations on the virtual array are easily accessible through the top-level command bar.
  • The StorSimple service summary blade provides aggregated information across the virtual arrays in a particular resource. This blade is designed to give you a quick summary on usage, alerts, etc., and serve as the starting point to deep dive into further details, both from the tiles on the blade as well as from the resource menu on the left.
  • Additionally, you can diagnose and potentially resolve common issues with your virtual arrays through the troubleshooting content that is available right within the Azure portal. You can also log a support ticket to request assistance from Microsoft Support.

To learn more about how to manage your StorSimple Virtual Arrays in the portal, please refer to the product documentation.


Migration of StorSimple Virtual Device Series resources from the classic portal

Your existing StorSimple Virtual Device Series resources in the classic portal will be migrated to the new Azure portal in the coming weeks. We will reach out to you with the date and details of the migration. Stay tuned!

Please note this migration will be seamless and there will be no downtime to your virtual arrays. Once the migration is complete:

  • All your StorSimple Virtual Device resources in the classic portal will be managed through the new Azure portal.
  • The StorSimple Virtual Device Series management will no longer be available on the classic portal.
  • The StorSimple Physical Device Series will continue to be managed via the classic portal. You will be able to view your StorSimple Physical Device Series resources in the new portal, but you will continue to manage them from the classic portal. We will keep you posted about the transition of the physical device series to the new Azure portal.

For more information on the new portal, refer to the blog post, which compares and contrasts the user experience in the new Azure portal and the classic portal.

To learn more about how to manage your StorSimple virtual arrays in the portal, please refer to the product documentation.

Source: Tech Net Microsoft

Posted in Update

Introducing Windows Server Premium Assurance and SQL Server Premium Assurance

New offerings and Software Assurance essential to your ongoing digital transformation

This post was authored by Mark Jewett, Senior Director of Cloud Platform Marketing, and Tiffany Wissner, Senior Director of Data Platform Marketing.

IT today is in a fundamental state of change, balancing digital transformation to make the company more competitive with maintaining stable and secure operations of existing systems. As part of this transformation, many applications are moving fast to the cloud, taking advantage of the agility, scale, and innovation it offers. Some applications aren't ready to move yet, but require modernization to better utilize more agile development processes and further the organization's digital goals. Some mission-critical and legacy applications just need to keep running without disruption. Your portfolio of applications has a diverse set of needs that will continue to evolve over time.

To provide you with further flexibility as your needs evolve, today we are announcing two new offerings to help you run applications even longer without disruption: Windows Server Premium Assurance and SQL Server Premium Assurance. These offerings add six more years of product support for Windows Server and SQL Server, allowing for a minimum of 16 years of total support (five years each of Mainstream and Extended Support, plus the new Premium Assurance period). The additional support period provides Security Updates and Bulletins rated Critical and Important (see the Security Bulletin Severity Rating System for definitions) for both products. This helps you continue to meet compliance requirements and ensure security on systems you aren't ready to update. Furthermore, greater peace of mind on those applications allows you to focus your energy on the applications more central to your digital transformation.


Premium Assurance extends the product lifecycles for Windows Server and SQL Server

You can purchase Windows Server Premium Assurance and SQL Server Premium Assurance separately or together, and both offerings will be available starting in early 2017. Premium Assurance pricing will start at 5% of the current product license cost and will increase over time (up to 12%). Buying before the end of June 2017 means you will save nearly 60% on the cost of Premium Assurance. The first versions covered by Premium Assurance will be SQL Server 2008 and 2008 R2 (Extended Support ends in July 2019) and Windows Server 2008 and 2008 R2 (Extended Support ends in January 2020).
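As a quick check on the numbers, here is how the early-purchase saving works out, assuming the "nearly 60%" figure compares the 5% launch rate against the eventual 12% ceiling (a reading of the announcement, not an official formula):

```python
launch_rate, eventual_rate = 0.05, 0.12  # fraction of current product license cost

def premium_assurance_price(license_cost, rate):
    """Premium Assurance cost as a share of the current license cost."""
    return license_cost * rate

# Saving from buying at the 5% launch rate instead of the 12% ceiling.
early_buy_savings = 1 - launch_rate / eventual_rate
print(f"{early_buy_savings:.1%}")  # 58.3%, i.e. "nearly 60%"
```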

Windows Server Premium Assurance and SQL Server Premium Assurance are purchased as add-ons to active Software Assurance for each product. In fact, Software Assurance is an essential investment in the journey to more broadly transform your application portfolio, providing the following notable benefits (and more!):

  • To help modernize applications, Software Assurance gives you access to the latest Windows Server and SQL Server innovation at no additional cost. The next version of Windows Server is currently in planning, and will further advance the Windows Server 2016 capabilities around hybrid cloud, multi-layered security, containers, Nano Server, and software-defined datacenter technology. The next version of SQL Server, already available in preview, brings the performance and security of SQL Server to Linux.
  • For applications moving to the cloud, Software Assurance provides unique flexibility and savings benefits. The Azure Hybrid Use Benefit can provide 40-50% savings on Windows Server virtual machines in Azure. Similarly, License Mobility provides the flexibility to deploy existing SQL Server licenses in the cloud without additional fees.

To learn more about Windows Server Premium Assurance and SQL Server Premium Assurance, and the six additional years of product support they provide, read the datasheet. The new offerings will be available early next year. In the meantime, buying and renewing Software Assurance with all your Windows Server and SQL Server purchases continues to be the best way to get the latest innovation and most flexibility for your application portfolio.

Source: Tech Net Microsoft

Posted in Uncategorized

Ignite session replay: Modernize applications with Windows Server 2016

Microsoft Windows Server 2016 was designed to support traditional applications as well as new cloud-native applications and DevOps workflows. Jeffrey Snover and Jeff Woolsey provide a great overview of the new options available today with the cloud-ready operating system:

  • The Nano Server configuration reduces the size of the OS deployment by 25x and dramatically reduces start times and reboots.
  • Windows Server Containers and enhancements to management and PowerShell create new options for modernizing and managing today's applications.
  • New architectures for packaging, repositories, configuration, operational testing, and secure operations minimize the friction between Devs and Ops and maximize the velocity of code deployment.

Watch the Ignite session:

For these and nearly 70 other great sessions, visit our Windows Server Ignite sessions on-demand.

Source: Tech Net Microsoft

Posted in Announcements

How to dynamically update and manage reverse DNS/PTR records for your CloudSigma infrastructure

It is now possible to dynamically manage your PTR records on CloudSigma’s DNS servers.

This new functionality provides greater flexibility in setting up your applications in our cloud. We now accept dynamic updates of PTR records within our zones. This means that our dynamic DNS servers (DDNS) will accept updates directly from individual IP addresses from the networks used within our clouds.

It is very important to note that this feature allows a server with a given IP address to update only its own PTR record, and that the updates must be sent over TCP.

Usually, updating PTR records is a manual process: the user of a given IP address must ask the organization that manages the IP network to add, remove, or modify the PTR record for that address. This post outlines how you can create and update PTR records yourself, without having to come to us.

So let’s see how you can update the PTR records of an IP address you’re using within CloudSigma’s clouds.

In order to update our DDNS servers you’ll need to use a program called nsupdate – available for Linux/*BSD/Windows.

If you have the program already installed skip to the example section below and execute the commands shown (don’t forget to change the domain name according to your needs). If you don’t have the software installed, you need to install it first.

OS Package Installation Requirements

  • Windows – BIND
  • Fedora/RHEL/CentOS – bind-utils
  • Debian/Ubuntu – dnsutils

Package installation instructions for each OS

Windows:

  1. Download BIND 9 for Windows
  2. Expand the archive and run BINDInstall.exe
  3. Verify and change the target directory according to your preference
  4. Check the box Tools Only and uncheck all the other boxes
  5. Click Install
  6. On successful completion, click OK. Then click Exit

Fedora:

dnf -y install bind-utils

RHEL/CentOS:

yum -y install bind-utils

Debian/Ubuntu:

apt-get update
apt-get install -y dnsutils

Managing your PTR records

Once nsupdate is installed you can move on to update the record.

Let’s say that you’re running a cloud server whose PTR record you want to change. The exact values depend on your own assignment, so the IP address, its reverse name, and the target hostname ( below are illustrative placeholders; substitute your own.

Here’s what you do to update its PTR record:

nsupdate -v
update delete IN PTR
update add 86400 IN PTR

These commands will effectively:

  • delete the old reverse record for the address
  • add a new reverse record pointing to with a TTL of 86400 seconds
  • send the command batch to the master DNS server

Using nsupdate -v ensures the updates are sent over TCP, which we require. Please note that each update must be made from the IP address whose record you wish to update.

If you are using multiple IP addresses on the same network interface, you may find it is not possible to successfully update your PTR records using the method above. If this is your case, please contact support and we’ll add the record manually for you. For the vast majority of customers this method works reliably and can easily be incorporated into automated deployment workflows to ensure PTR records are in place across a dynamic environment.

Happy computing and good luck with your reversing!



Posted in Automation, Blog Posts, ddns, Networking, ptr, rdns, Tutorials

Ignite session replay: New security features in Windows Server 2016

Cyberattacks are more sophisticated than ever before. Now you can use the OS as a new line of defense to help protect your organization's IT assets.

Windows Server 2016 provides layers of protection that address both known and emerging threats, resulting in a server that actively contributes to securing your infrastructure. These protections were built to mitigate an array of attack vectors and to deal with the threat of ongoing attacks inside the datacenter. They range from enhanced detection and hardening to managing privileged identity and protecting virtual machines from a compromised fabric.

Check out this Ignite session to learn more:

For these and nearly 70 other great sessions, visit our Windows Server Ignite sessions on-demand.

Source: Tech Net Microsoft

Posted in Announcements

Are we stealing from you? Understanding CPU Steal Time in the Cloud

Customers often ask about CPU steal time, especially those that use their CPUs heavily and for whom it’s a key performance criterion. There are quite a few differences in the setup and behaviour of CPUs and cores between physical and virtual environments. Even between cloud providers there are setup differences that make like-for-like comparisons difficult on the face of things. For this reason we thought it useful to provide a brief overview of our setup and CPU allocation logic, as well as to explain the most common sources of CPU steal time.

So firstly, for those unfamiliar with the concept: CPU steal time is the time that the virtual CPU within your cloud server has to wait for the real physical CPU while the hypervisor is busy using it for other things (like other virtual machines/cloud servers).
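On Linux guests, steal time is reported as the eighth numeric field on the cpu line of /proc/stat. A minimal sketch for turning that into a percentage (the sample line is made up; on a live system you would diff two snapshots for a current rather than boot-averaged figure):

```python
def steal_percent(cpu_stat_line):
    """Compute steal time as a percentage of total CPU time from a
    /proc/stat 'cpu' line: user nice system idle iowait irq softirq steal ...
    """
    fields = [int(x) for x in cpu_stat_line.split()[1:]]
    steal = fields[7]        # 8th numeric field is steal
    total = sum(fields[:8])  # guest fields are excluded; already counted in user/nice
    return 100.0 * steal / total

# Sample line; on a real system: open("/proc/stat").readline()
print(steal_percent("cpu 100 0 50 800 10 0 5 35 0 0"))  # 3.5
```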

A Little Information on our CPU Set-up

The first thing to understand relates to the way cores are allocated between virtual machines on each physical compute node hosting your computing. CPUs and their cores at CloudSigma are shared. In other words, we do not pin a customer cloud server to specific cores. The CPU time is assigned by the physical compute node’s scheduler dynamically and everything is shared. We believe this has a number of benefits in delivering more reliable performance holistically by allowing the compute node to make sensible allocation adjustments on the fly to balance load.

In combination with this, we use Control Groups (cgroups for short) to guarantee enough CPU time for each cloud server, in line with the resources you have set via the server size. The scheduler then decides what to do with any resources remaining beyond the cgroup guarantees. It’s also worth noting that we reserve a set of specific cores outside the range of allocation for customer workloads. These cores run the operating system of the physical host, and in particular we reserve additional cores for processing networking and storage operations. All of this is designed to increase the stability of the overall machine and to deliver reliable performance levels over time, independent of other customers’ load.

The Sources of CPU Steal Time in a Virtualized Environment

Things are more complex in a multi-tenant virtualized environment than in a physical one, so there are multiple sources and situations where you can experience CPU steal time. Not all of them mean you are failing to receive the CPU time you should: in many cases you can soak up spare CPU cycles beyond your allocated size, and that’s not a situation where you’d see CPU steal time. The three most common situations are outlined below in more detail.

Your Cloud Server is Overloaded
It happens! Everyone wants to use as close to full capacity as possible for what they’re paying for. However, if the CPU allocated to your virtual cloud server is not enough to process the workload, you can see CPU steal time as things back up and queue within the virtual CPU. If this is the root cause, the resolution is to resize the cloud server. If it’s a temporary overload, you can safely leave things unchanged and you’ll see CPU steal time disappear when your load goes down.

The Physical Server Hosting your Cloud Server is Overloaded
The host is overloaded; in this case the failure is on our side. It’s rare, but it can happen. When it does, we use live migration to move virtual machines to other physical compute nodes without disruption, bringing load levels back down to normal. Generally we keep hosts well below full load, so if you continue to observe this over an extended period, please contact us and our free 24/7 support can check the physical host you are on. If it’s not overloaded, then it’s unlikely to be the root cause of your CPU steal time.

You are Using a Smaller Virtual Core Size
At CloudSigma we give you the ability to define the virtual core size, so you can, for example, have more CPU threads via more, smaller virtual cores for any given cloud server size. The operating system inside the cloud server, however, will always report each core at the full physical size. So if the physical core is 2.6GHz but you set your VM to 4GHz across two cores, each virtual core is allocated 2GHz. You will then always see some steal time, because you are only being allocated a pro-rata share of each physical core rather than the full core, due to the smaller virtual core sizing. As such, you should always adjust any calculations of CPU steal time to take smaller virtual core sizing into account if you are using it. To avoid this effect entirely, you can expand the virtual core size to the full CPU core size (e.g. Intel v4 2.6GHz).
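The pro-rata arithmetic can be made explicit. This sketch computes the baseline steal fraction you would expect purely from virtual core sizing, before any real contention, using the 2GHz-virtual-on-2.6GHz-physical example from the text:

```python
def baseline_steal(virtual_core_ghz, physical_core_ghz):
    """Fraction of steal time expected purely from a smaller virtual core size."""
    allocated = virtual_core_ghz / physical_core_ghz
    return max(0.0, 1.0 - allocated)

# 4GHz server as two 2GHz virtual cores on 2.6GHz physical cores:
# roughly 23% steal is expected even with no contention at all.
print(f"{baseline_steal(2.0, 2.6):.1%}")  # 23.1%
```

Any measured steal above this baseline is the figure worth investigating.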


CPU steal time in the cloud is a bit more complex than in traditional single-tenant physical environments, but it definitely still exists. The reporting of CPU steal time by operating systems, however, hasn’t adjusted for these different conditions, so you can get false positives. When you do find genuine CPU steal time, it usually means a resource constraint is occurring, and we hope this post helps you quickly identify the root cause and ensure continued smooth operations.

Happy computing!



Posted in Blog Posts, cloud management, CPU Steal Time, Deployment, Optimization, Performance, Tutorials

FBI warning: Ransomware attacks skyrocketing

Cybercriminals collected $209 million in the first three months of 2016 by extorting businesses and institutions to unlock computer servers.1 And that estimate is probably low, considering many companies fail to report such attacks for a variety of reasons. This type of crime has grown rapidly and is quickly becoming a favorite of attackers because it is so easy to execute. An attack like this on your business can have disastrous effects, many of which aren't seen until after the ransom is paid.

What is Ransomware?

Simply put, it's a type of malware that gets into a computer or server and encrypts files, making them inaccessible. The goal is to shut down your ability to do normal business. The attacker then demands a ransom for the key to unlock your data.

One recently publicized attack underscores how difficult it can be to decide what to do. An L.A.-area hospital was targeted and hundreds of patients' lives were put at risk. The attackers achieved their infiltration through a simple targeted phishing email, and one click on an attachment locked up the hospital's medical records. The hospital had very little recourse and ended up paying $17,000 to the attackers for the key to its own data.2 In this case, paying the ransom was an easy choice with real health concerns in the mix, but that's also what made the hospital an ideal target. If you get hit with a ransomware attack, your organization will have an extremely difficult decision to make.

There are two choices, and neither is ideal.

Choice A: Pay the ransom

This is certainly the easiest way to get back up and running, but it only increases the likelihood you'll be attacked again. Additionally, you are funneling money to organized crime or potentially even terror organizations. In some cases, companies paid the ransom only to have the attackers ask for more.

Choice B: Work to recover your systems

If you choose not to pay the ransom, you'll need to recover the locked data yourself. If you do not have a clear recovery protocol in place, then you may have to deal with being locked out of your data and systems for a while. That forces you to weigh the impact on your business against the ransom ask, which is exactly what the attackers want.

FBI guidelines: How to protect your company3

While ransomware attacks may have spiked, the tactics for preventing them are not new. It's the same as for all types of malware: educate your employees on proper email protocol, keep hardware and software patched and up to date (especially on your endpoints), and manage access to your privileged accounts.

That said, as with any malware, it's nearly impossible to stop everything. Per the FBI, your best defense against this type of attack is having a strong backup policy. Not just backup: a backup policy. That means you:

  • Regularly back up data. This is the simplest and most effective way to recover critical data.
  • Secure your backups. That means storing them somewhere that is not connected to the original data, such as in the cloud or physically offline.
  • Run recovery drills. The only way to know for sure if your system will work is to test it in real-life situations.

To us, this further underscores the need for a strong recovery plan that includes both backup and disaster recovery (DR). Many companies, once they have a DR solution in place, choose to use less and less backup to save costs. The problem is that, while incredibly useful, disaster recovery faithfully replicates your current environment. If that environment is compromised, so is your DR.

When you have a solution like Operations Management Suite and Azure Backup, you don't need to take that risk. Azure Backup gives you an extremely cost-effective and secure way to store your backups in the cloud. It preserves recovery points for up to three days, giving you a way to restore quickly after an attack is discovered, and tools like two-factor authentication and deferred delete prevent destructive operations against your backups. It's a few simple steps that could save you from a disastrous attack.

Learn more

See how integrated cloud backup and disaster recovery provide you with greater security on our Protection and Recovery page.

Free trial

Try Operations Management Suite for yourself and see how it can give you increased visibility and control across your entire hybrid environment. Get your free trial >

1. Fitzpatrick, David, and Griffin, Drew. "Cyber-extortion losses skyrocket, says FBI." CNN Money, 2016.

2. Staff Report. "LA Hospital Paid 17K Ransom to Hackers of Its Computer Network." NBC Los Angeles, 2016.

3. FBI Public Service Announcement. "Ransomware Victims Urged To Report Infections To Federal Law Enforcement." September 2016.

Source: Tech Net Microsoft

Posted in Cybersecurity, Hybrid Cloud, OMS, Ransomware, Security

10 reasons you’ll love Windows Server 2016

Windows Server 2016 is the cloud-ready operating system built to support your current workloads and help you transition to the cloud. We've taken all of our learnings from Azure and built them right in, packing it full of exciting new innovations and features. Here are ten we think you'll love:


1. Control the keys to your sensitive data.

Improve security by limiting access to the IT environment with Just Enough and Just-In-Time Administration. Control who gets access to what and for how long. With Windows Server 2016, you can define which keys each admin has access to, even setting temporary permissions.


2. Manage your servers from anywhere, even your mobile device.

Windows Server 2016 has a new toolset hosted in the cloud called Server Management Tools. It's a web-based remote GUI that allows you to manage your servers (physical or virtual, datacenter or cloud) from just about anywhere, even your mobile device.


3. Deploy servers in an exact configuration and keep them that way.

Automate tasks and manage settings to set up servers and keep them configured properly. We've enhanced PowerShell Desired State Configuration and given you the ability to define, deploy, and manage your software environment using a single console. We've also added elements of open source software that make it easier to test your code.


4. Easily handle the 9:00 A.M. logon storm.

We've improved and strengthened our Remote Desktop Services platform, allowing partners to build secure, customized apps. Graphics improvements have increased compatibility and performance across the board.


5. Do-it-yourself storage.

Software-defined storage used to be exclusive to storage industry vendors. With Windows Server Storage Spaces Direct, we've included all the features you traditionally expect directly in the operating system. This means greater performance without the premium cost.


6. Upgrade without the downtime.

Rolling Cluster Upgrades and Mixed Mode Cluster allow you to upgrade and manage your servers without taking them down. This is designed to help reduce the impact of management operations on your workload.


7. Click, click, done: just like in Azure.

Windows Server 2016 Software-Defined Networking is based on clear and concise policy management, cutting the time spent on infrastructure. This Azure-inspired network virtualization feature gives you the centralized control to configure network resources. Deploy new workloads more quickly and use network segmentation to increase security.


8. Move beyond passive security.

Traditional, passive perimeter security is becoming less and less effective. Once someone bypasses your wall, they are free to do whatever they want. With Windows Server 2016, that is no longer true. Add new layers of security to your environment to control privileged access, protect virtual machines, and harden the platform against emerging threats.


9. Use Containers to streamline app deployment from keyboard to production.

We are excited to announce that Containers are now built into Windows Server 2016, helping to accelerate your app deployment. Use Containers to streamline existing apps and to create new microservices, whether on-premises or in any cloud.


10. A super small server that packs a big punch.

Nano Server is a new deployment option for Windows Server 2016 with an image 25x smaller than Windows Server 2016 with the desktop experience. It brings only the elements that the specific workload needs, resulting in faster boot times and simpler operations.

BONUS: Move to the cloud for less using your existing licenses

With the Azure Hybrid Use Benefit, you can use on-premises Windows Server licenses that include Software Assurance to earn special pricing for new Windows Server virtual machines in Azure, whether you're moving a few workloads or your entire datacenter. Start saving now >

Ready to dive deeper?

This e-book will tell you everything you want to know about Windows Server 2016.

Get the Ultimate Guide to Windows Server 2016.

Source: Tech Net Microsoft

Posted in Uncategorized

Announcing the launch of Microsoft BizTalk Server 2016

Today, we are announcing the release of Microsoft BizTalk Server 2016. This marks the tenth major release of a product that has been in the market serving customers' application integration needs for the past 15 years. This release not only highlights key on-premises application integration capabilities that help customers automate mission-critical business processes, but also showcases our strong commitment to the hybrid integration platform.

We realize customers have different business needs. In addition to running workloads on-premises, many businesses want to run some workloads and applications in the cloud. Our goal is not only to provide flexibility and agility to our customers, but also to provide a consistent experience, whether you are looking to integrate applications, data, and processes on-premises or across the cloud. With the release of BizTalk Server 2016, customers can seamlessly connect to cloud applications through Azure Logic Apps. Customers can now connect to SaaS applications faster, enable enterprise cloud messaging across vendors and partners, and take advantage of first-class integration with Azure services, including Azure Functions, Machine Learning, and Cognitive Services via the Logic Apps adapter, all from the comfort of BizTalk Server 2016.

To learn more about why our customers want to upgrade to BizTalk Server 2016 and hear our customer comments about the new release, please check out Frank Weigel’s recent Azure blog post.

Source: Tech Net Microsoft

Posted in Announcements