Content Delivery Networks in Windows Azure
Are you trying to understand what the CDN is and how you can use it? I think of a CDN as a DFS in Azure. I think many of us traditional IT folks look for ways to relate on-prem IT, and our understanding of it, to cloud-based IT.
Think of an on-prem server environment where we want a single repository of data that is then replicated out to other servers on separate network segments. Just like DFS distributes content to separate locations, the CDN does exactly the same thing.
So can I place all content into a CDN? Well, not quite. The CDN really is for static content, content that does not change too much. If you use the CDN with Azure websites it's possible to serve content such as images from it. Azure can only cache data that is tagged as publicly cacheable, so you need to set the cache-control HTTP header.
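For example, here is a minimal sketch using the classic Azure PowerShell storage cmdlets to upload a blob tagged as publicly cacheable; the storage account name, key variable and file here are placeholders, not from the guide:

```powershell
# Upload an image and mark it publicly cacheable for 7 days so the CDN can serve it.
# "mystorageaccount", $storageKey and .\logo.png are hypothetical - substitute your own.
$ctx = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey $storageKey
Set-AzureStorageBlobContent -File ".\logo.png" -Container "images" -Context $ctx `
    -Properties @{ "CacheControl" = "public, max-age=604800" }
```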
Infront's own Richard Green wrote a great guide on CDNs that I was a reviewer on.
You can download the guide here
Please download the guide (and give him a good rating)
PK
Friday, March 28, 2014
Microsoft Cloud OS Online training with the MVP group.
The MVP Cloud OS Roadshow continues to inspire IT Professionals and Developers to explore the Cloud OS vision!
What started in the UK has grown to encompass more than 20 countries, including Central and Eastern Europe, Germany, Spain, the Netherlands and North America. Each MVP-led event delivers a series of real-world scenarios that demonstrate how to integrate the Cloud OS into the constantly changing landscape that IT Professionals and Developers work in.
MVPs continue to share their Cloud OS expertise and real world experience, helping businesses to think differently about their Cloud solutions.
http://blogs.msdn.com/b/mvpawardprogram/archive/2014/03/27/mvps-host-global-cloud-os-roadshows.aspx
Thursday, March 27, 2014
Configuring a Windows Azure SQL Sync Group
Richard Green, Azure Ninja, wrote a great guide on SQL in Azure.
In this guide, I'm going to walk you through the process of setting up Windows Azure SQL Sync between two SQL Azure databases. This technology allows you to replicate databases either between instances of Azure SQL databases or with on-premises SQL Server databases.
Richard J Green works as a Senior Technical Consultant for Infront Consulting, specializing in delivering System Center solutions that help customers leverage their investment in IT with Microsoft technologies. Richard works extensively with Windows Azure and System Center.
You can follow more from Richard here
The operating system reported error 340: The supplied kernel information version is invalid.
SCCM Distribution Manager is not processing Applications.
So I had an issue with SCCM for the last few days whereby packages were failing to process.
I was getting an error that the SCCM services had no permission to the folder share. When I was looking at the status messages it gave this super obscure message...
It claims that the SCCM server does not have permissions to the share.
Now I checked that the two servers had full access to the folder and share, so this just didn't make sense.
I went to the Distribution Manager log (distmgr.log) on the PS server and it told me this:
failed to create instance of IRdcLibrary
Well, that's the issue right there.
Remote Differential Compression (RDC) is either not installed or not functioning on the primary site server. As far as I know, the PS server needs to use RDC to build the new file for the content library on the DPs. In my case the PS server does not have any DP on it, and I can only assume that someone removed RDC, not knowing that it is the PS server that takes the content, uses RDC, and inserts the new package into the content library on a DP.
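If you want to check for and fix this from PowerShell, here is a minimal sketch (RDC is the Remote Differential Compression feature name on Server 2012/2012 R2):

```powershell
# Check whether Remote Differential Compression is present on the site server,
# and install it if it is missing.
Import-Module ServerManager
if (-not (Get-WindowsFeature -Name RDC).Installed) {
    Install-WindowsFeature -Name RDC
}
```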
Anyway, I got RDC back on the server and all my packages started to process.
And my package status goes green.
Thursday, March 13, 2014
SCOM 2012 HA OPTIONS
SCOM 2012 WITH SERVER 2012 R2 & SQL 2012 SP1
As a full time System Center consultant I am often asked about deploying products in the System Center suite in a multi-site global deployment. The application that comes up top of the list is SCOM. Companies want to monitor data centers across the globe with SCOM deployments that are 100% Highly Available (HA). Once we look at multi-site SCOM deployments we are naturally going to incur additional costs and complexity. In this guide we will look at a diagram that covers the components and shows the flow between each of them. We will then examine each configuration, looking at the pros and cons.
Deploying SCOM with the different HA options
So in this section we are going to walk through the different options for installing SCOM in a multi-site environment, and what considerations you may need to take into account in designing and implementing your SQL Server or servers. SCOM is a great product in the System Center suite to discuss because a lot of companies require multi-site monitoring and want a highly available SCOM, whereby if they lose a primary data center they want monitoring to fail over to a secondary data center or DR site.
We are going to start off with the most basic SCOM deployment and work our way up from there. If you are a seasoned SCOM pro please excuse the basic nature of this, but it will be helpful for others to understand what it's all building on.
Just in case some of the common abbreviations are not familiar to you:
- MG (Management Group): the security realm between the MS and the SQL DB
- MS (Management Server): the server that has a writable connection to the SQL DB
- DB (Database): the SCOM monitoring and reporting databases that are hosted by SQL
- GW (Gateway): the SCOM role that is used in remote locations to forward data to an MS
- MP (Management Pack): an XML document that holds discoveries, rules, monitors and reports
SQL Licensing
Licensing is always a complex issue with System Center, and it doesn't get easier with SQL. That being said, I have been told from several sources that the no-cost SQL Standard license also applies to clustered instances of SQL Standard that are only being used to house System Center DBs. It was also confirmed to the MVP group that you can deploy SharePoint where its only purpose is to house System Center dashboards, and there is no licensing requirement.
Firstly we need a management server and a SQL Server to host the SCOM DB. A server we want to monitor has a SCOM agent loaded on it, and it sends its monitoring data to the management server; the management server in turn writes that data to databases on a SQL Server. If we deploy SQL Standard and it is only running to support System Center then there is no cost for the SQL license.
New SQL 2012 guide for System Center.
Some time ago I wrote a guide on deploying System Center onto SQL and what you may need to consider.
The guide came about from a heated discussion between myself and a DBA at a big company who was not happy with a SQL configuration I had made.
When I looked at the data out there it was very difficult to get a clear picture on the full gamut of System Center products and how they would interact with each other.
The last guide was written before the R2 release of System Center 2012, and so a lot of the new SQL 2012 features were not covered. I know it took a long time to get this guide together, but it's nearly 200 pages of content. The people involved were: Pete Zerger did Azure, Robert Hedblom did DPM, Matthew Long did the SQL MP chapter, Craig Taylor did the SQL VMM template section, and Richard Green and Craig worked with me on the general editing and the cluster builds.
You can get the guide here
Monday, May 20, 2013
.NET 3.5 and Server 2012
I have tried several ways to get the .NET Framework 3.5 installed on Server 2012, however the only way that works every time and is super fast is PowerShell.
On the Server 2012 media is a folder called sources\sxs, so from an elevated command window I run
dism /online /enable-feature /feature:netfx3 /all /limitaccess /source:f:\sources\sxs
as per the screenshot below. F:\sources\sxs is just wherever you have the SXS folder located.
Now I was working on a project with installs around the world and it was proving hard to get the media to all the different sites, so I wondered if copying just the .NET 3.5 cabs from that SXS folder would do the job, as it's only 18 MB, but alas it didn't work and I had to go back to the full folder. What I now do is load the SXS folder onto a network share and just use the network path in a PowerShell script, as shown below.
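A minimal sketch of that script, assuming the sources\sxs folder has been copied to a share (\\fileserver\media is a hypothetical path, substitute your own):

```powershell
# Install .NET Framework 3.5 on Server 2012 from a network copy of sources\sxs.
# \\fileserver\media\sources\sxs is a placeholder path.
Import-Module ServerManager
Install-WindowsFeature -Name NET-Framework-Core -Source "\\fileserver\media\sources\sxs"
```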
SQL Permissions Error on setup (The SQL service account login and password is not valid)
I was installing several SQL instances for an SCCM pre-prod environment that was going to mimic a production build, but on a much smaller basis. I was using a specific AD account and it had to span three domains in the same forest. In the first domain (where the account existed) I had no issues. Once I went to the other two domains I kept getting the same two errors, namely:
"The SQL Server service account login or password is not valid"
Or
"The credentials you provided for the Reporting Service are invalid"
I make as many mistakes as the next person, so I wrote down the username and password in Notepad and then opened PowerShell as that user (so I deffo knew I was using the correct username and password).
I am crap at reading past error messages, but the second error caught my eye.
"The SQL Server service account login or password is not valid"
Or
"The credentials you provided for the Reporting Service are invalid"
I make as many mistakes as the next so I wrote down the username and password on notepad and then opened power shell as that user (so I deffo knew I was using the correct username and password)
I am crap at reading past error messages but the second error caught my eye
So it is telling me here that if I have an issue with the domain account I can go and change that in SQL Server Configuration Manager. That means I can perform my install with the standard built-in accounts and then change them when the install is done. But the really important thing here is that you can't go to services.msc and change the account that any of the SQL services run under, as this will not allow SQL to properly configure the services. One of the things that the SQL configuration does is grant the user the right to log on as a service (that might have been my issue, but it still does not make sense when SQL Server Configuration Manager can make the changes).
So jump into SQL Server Configuration Manager, select each of the SQL services, and change the user account from the built-in account to the domain account you want.
Now you can see that I have all the services changed to domain accounts, except for the Reporting Service.
To change the SSRS account you need to log on to the SSRS Configuration Manager; before we can change the account to a domain account we need to back up the encryption key.
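If you prefer the command line, the key can also be backed up with the rskeymgmt.exe tool that ships with SSRS (a sketch; the output file path and password here are placeholders):

```powershell
# Back up the SSRS encryption key before changing the service account.
# rskeymgmt.exe lives in the Reporting Services bin folder; path and password are placeholders.
rskeymgmt -e -f "C:\Backup\ssrs-key.snk" -p "P@ssw0rd!"
```

Keep that file and password somewhere safe; you will need them if you ever restore or move the SSRS databases.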
Now when I look at the SSRS page there is a domain account and not a built-in account.
And the same can be seen on the SQL Server Configuration Manager page.
There are two last points here: if you are installing SCCM you must use domain accounts, you cannot use the built-in SQL accounts; and if you are running these services under a domain account then you will need to register the SPNs, as sketched below.
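A minimal sketch of the SPN registration (the server FQDN, port and account are hypothetical, substitute your own):

```powershell
# Register SPNs for the SQL Server service running under CONTOSO\sqlsvc.
# setspn -S checks for duplicates before adding the SPN.
setspn -S MSSQLSvc/sqlserver.contoso.com:1433 CONTOSO\sqlsvc
setspn -S MSSQLSvc/sqlserver.contoso.com CONTOSO\sqlsvc
```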
To CAS or not to CAS....
I have been designing and deploying Configuration Manager for 12 years. In recent years I have been lucky enough to be working on some large-scale SCCM projects. So what is large scale to me? Over 40,000 seats goes into the "large scale" category for me. The CAS is nothing new; in SCCM 2007 we had the idea of the central site, and in many ways it acted in the same way as the CAS. The CAS, or Central Administration Site, is basically a central admin location where you can manage multiple primary sites in one console.
When SCCM 2012 was released the CAS had to be installed during the initial install phase, and as a result a lot of people deployed a CAS as a just-in-case design. The CAS then received a lot of negative press because it introduced a layer of complexity that many didn't understand. I am going to cover the practical steps for deploying a CAS in another blog, but in this post we are just going to look at some of the design decisions you may go through when choosing to deploy a CAS.
A single Primary Site (PS) can support up to 100k clients. I have never deployed an SCCM environment with 100k seats, but I have installed a CAS three times, so was I wrong to do that? Let me give you a profile of one install. The customer was a retail customer with a total of about 80k seats. The client was expecting about 5% growth over the next three years, and they have a support goal of never going past 80% of the Microsoft supportable limit, so we needed a CAS to support more than one PS.
My next install was for 45k seats, which is well within the 100k limit, with a growth rate of about 15% over the next three years; with this deployment we still went for a CAS and three PS servers. So if the CAS brings more complexity and more databases and servers, then why choose a CAS?
It came down to the following points:
If I have a single PS server supporting say 50k clients and it goes down, then we can add no new software, clients or OS images. No new policies can be sent to the clients, and this makes it a pretty big single point of failure for a large org.
In this example we have 1 CAS and 3 PS; the 3 PS were in the US, EU and APAC. With this design we could lose the CAS and 2 PS servers and still have no issues, with the last PS working.
There is no doubt that with the changes in SCCM SP1, and the ability to add a CAS after installing the first PS, smaller sites are now more likely not to go for a CAS to begin with unless you really need one. And who should decide that... well, you.
In SP1 there are also some good changes to how we can view what is happening from a site replication standpoint within the monitoring component.
Wednesday, March 6, 2013
SCCM 2012 SP1 Pre-Stage Content on a DP
In this post we will look at how you can pre-stage content on a DP. As with all the other videos, they need to be watched with the YouTube HD setting enabled, as per the clip below.
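As a quick reference for what the video covers: once you have created a prestaged content file (.pkgx) from the console, you import it on the DP with the ExtractContent tool. A minimal sketch (the path is a placeholder):

```powershell
# Run on the distribution point: extract the prestaged content file into the content library.
# D:\Prestage\content.pkgx is a hypothetical path; ExtractContent.exe lives under
# SMS_DP$\sms\Tools on the DP.
.\ExtractContent.exe /P:D:\Prestage\content.pkgx
```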
User Device affinity Video
This is a video blog on User Device Affinity in SCCM 2012. UDA is the ability to link a machine to a user. When we link the user account to the computer account we can target software to the user, yet not have that software apply to every machine they log on to.
All of these videos are best watched in HD.
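As a side note, if you want to set an affinity from PowerShell rather than the console, here is a sketch assuming the ConfigMgr cmdlets loaded from the console's PowerShell prompt (cmdlet and parameter names vary by version; the names below are placeholders):

```powershell
# Link a user to a device so user-targeted software follows them to their primary machine.
# "CONTOSO\jsmith" and "PC001" are hypothetical names.
Add-CMUserAffinityToDevice -DeviceName "PC001" -UserName "CONTOSO\jsmith"
```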
Tuesday, March 5, 2013
Host Agent Fail Error ID:26263 Adding new storage
In a LAB environment it's not that easy to get access to enterprise storage, and as such it sometimes takes a bit of trial and error to get the SMI-S provider working, as we can't practice it in a LAB. I was recently working with an EMC VNX 5300 SAN, so here is what I did to get the provider working. (I will add that I got a bit of help from EMC support; I would not have got it going with just the information on TechNet.)
No matter what I did I kept getting the following error when importing my storage. There was almost no information on the web except other people with the same error.
Let's firstly understand the point of this SMI-S provider and what it does. With a large, disparate set of storage manufacturers, all with different ways to communicate with their storage devices, it is difficult for other vendors to make products that can manage or monitor storage devices through one interface. SMI-S is therefore a vendor-neutral standard for managing storage providers. So how do they communicate? Well, when you install the SMI-S provider it essentially creates an HTTP server that can parse XML, so that, for example, when an administrator in the VMM console requests a LUN of new storage for a cloud, that request is sent to the provider, which in turn translates it into the creation of a new storage LUN on the SAN. If you consider the impact of that, we can easily have a configuration whereby the VMM admin can control storage, hypervisor and network components in one place. That's a really powerful admin position in terms of reducing the admin overhead.
Have a look at what Microsoft wrote about it on TechNet. http://blogs.technet.com/b/filecab/archive/2012/06/25/introduction-to-smi-s.aspx
Before we get into adding the storage, here is a diagram to explain, in overview, what you are supposed to be doing.
Firstly you will need to get the SMI-S provider from EMC. The one I used was se7507-Windows-x64-SMI.
Then follow through with the install as shown below, but be sure to install the provider on something like one of your Hyper-V clusters and not on the VMM server. You can install it multiple times, as it's essentially just brokering a connection to your SAN, and your SAN doesn't care if there is another one installed.
Once you have the install done there are two command sets we need to run, and they are all under %ProgramFiles%\EMC. The two areas you will need to go to are:
For authorization: %ProgramFiles%\EMC\SYMCLI\bin
And to check the deployment: %ProgramFiles%\EMC\ECIM\ECOM\bin
I am assuming that you are running this on Server 2012, so you are going to need to open any CMD sessions from an elevated command prompt.
Enter the command that I have here, replacing the host IP address with your own. If you get the command wrong it will just prompt you to let you know that you have entered the command incorrectly, as shown below.
Then navigate to %ProgramFiles%\EMC\ECIM\ECOM\bin and enter the Disco command. Now it's important here to understand that there is a bit of a design fault in the way EMC has built this, as the only account that works for setting up the connection is the account that has full admin access to the storage device. Let's hope this is changed in the near future.
Now we are going to change to the second folder set and verify the SMI-S connection. In the CMD screen below, where I have not entered a value I just accepted the defaults.
The last thing that needs to be done now is to enter "Disco" to discover your storage provider. As you can see from the value below, 0 means you are connecting OK.
Now that we have the SMI-S provider talking to the SAN we can go to VMM and add in the storage array.
Now let's go to VMM and get this storage in.
At a high level, here is what is needed:
- Create a RunAs account
- Add the storage provider (it's super important to understand that it's the host where you installed the provider, not your VMM server)
- Once the storage is added you can create LUNs
- The new LUNs then need to be added to your cluster in VMM
To begin the process we need a RunAs account. I used my generic admin account, as per the steps above. As this is not a domain account you do not want to validate the domain creds.
I added the storage by its network name, and to do this I had to add that name to DNS. The protocol is SMI-S CIMXML, and the port number is what was specified in your provider test in the previous section; this is usually 5988. A sketch of the equivalent VMM PowerShell follows below.
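For reference, here is a minimal sketch using the VMM PowerShell cmdlets (parameter names can vary between VMM releases, and the provider host name and RunAs account name are placeholders):

```powershell
# Add the SMI-S CIMXML provider to VMM, pointing at the host running the EMC provider.
# "smis-host.contoso.com" and "SANAdmin" are hypothetical names.
$runAs = Get-SCRunAsAccount -Name "SANAdmin"
Add-SCStorageProvider -Name "smis-host.contoso.com" -RunAsAccount $runAs `
    -NetworkDeviceName "http://smis-host.contoso.com" -TCPPort 5988
```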
The storage array gets discovered and you can verify that the provider is working OK.
I get a confirmation of the storage addition and the provider address.
Now when I go to create the LUN I can do so from within VMM. In this case I am using thin provisioning on the LUN, as there is no need to commit all the disk right from the start. A scripted sketch of the same operation follows below.
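Something like this, assuming the VMM cmdlets (the pool name, LUN name and size are placeholders):

```powershell
# Create a 500 GB thin-provisioned LUN in a storage pool that VMM discovered via SMI-S.
# "Pool01" and "VMM-LUN01" are hypothetical names.
$pool = Get-SCStoragePool -Name "Pool01"
New-SCStorageLogicalUnit -StoragePool $pool -Name "VMM-LUN01" `
    -DiskSizeMB 512000 -ProvisioningType Thin
```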
You can now go to the fabric in VMM and add the storage.
You can now see that there are 3 LUNs being made available to the cluster.
Following these steps I was able to add the storage and manage the LUNs through VMM. When I saw this all working the way it's supposed to, I was really impressed with the working reality of this product.
Just one last note: there is a way to test the SMI-S provider, and it's a SourceForge download that you can get here.
PK