The Content Database Support and Remote BLOB Storage Myth

There’s a popular myth that keeps popping up that I wanted to post about.

Why is it so popular?
Well, because it seems intuitive if you aren’t working with SharePoint on a regular basis. If you are, then I’m sure you don’t think this… and if you did, well, shortly you’ll know the truth.

So here’s the myth:
“We don’t need to split our content across separate content databases because if we need more than 200GB support for each database we will [1] move subsites around to different site collections in different databases or [2] use remote blob storage and put it all on file shares… then we’ll have a very small content database size.”

Why is this a myth?
Let’s address the second part of the statement first – “[2] use remote blob storage and put it all on file shares… then we’ll have a very small content database size”. This is a myth because the content database will still not be supported by Microsoft. The reason is that the actual database size itself PLUS the content offloaded and stored on file shares both count towards the 200GB limit (or 4TB if you meet additional requirements). This means that even if you had a 1GB database and 225GB offloaded onto file shares for this content database, you’re actually at 226GB and therefore not supported if you do not meet those additional requirements. And if you do meet the requirements and have a 1GB database with 4.5TB offloaded onto file shares for a specific content database, you’re now at roughly 4.5TB of content and again, not supported.
From http://technet.microsoft.com/en-us/library/cc262787.aspx: “If you are using Remote BLOB Storage (RBS), the total volume of remote BLOB storage and metadata in the content database must not exceed this limit.”
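
If you want to check where a given database stands against the limit, a rough approach is to add the database size to the size of its RBS blob store on disk. A minimal PowerShell sketch (the site URL and blob store path are placeholders for your own environment):

  Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
  # Size of the content database itself, in bytes
  $db = Get-SPContentDatabase -Site "http://intranet"
  $dbBytes = $db.DiskSizeRequired
  # Size of the BLOBs externalized to the file share (path is hypothetical)
  $rbsBytes = (Get-ChildItem "\\fileserver\RbsStore\WSS_Content" -Recurse | Measure-Object -Property Length -Sum).Sum
  # Both figures count towards the 200GB (or 4TB) limit
  "{0:N1} GB counted against the limit" -f (($dbBytes + $rbsBytes) / 1GB)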

Now let’s address the first part of the statement – “we will [1] move subsites around to different site collections in different databases”. This also is a myth because, although it is doable, that doesn’t make it a good idea. Do you remember that old Chris Rock line: “You can drive your car with your feet if you want to, but that don’t make it a good, [expletive] idea”? Yes? Well, it’s the same here. So why isn’t it a good idea? Because subsites are contained within a site collection, and there is a close relationship between a site collection and its subsites. Objects such as site columns and content types are associated with and shared by subsites. If you want to move an individual subsite, you have to consider how you are going to move these shared objects as well – and this is where it gets tricky, because a number of objects are difficult to move, for example workflow history and approval field values. Even if you investigate using third-party tools to move your subsites, you will likely encounter issues.
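
To make the fidelity problem concrete: the out-of-the-box way to relocate a single subsite is an export and import, and this is exactly where things get lost. A rough sketch (URLs and paths are placeholders), noting that workflow history, approval state and similar objects don’t round-trip:

  Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
  # Export the subsite; workflow history and approval field values are not preserved
  Export-SPWeb "http://intranet/teams/finance" -Path "C:\Backups\finance.cmp" -IncludeVersions All
  # Import into a site collection that lives in a different content database;
  # shared site columns and content types must already exist at the destination
  Import-SPWeb "http://projects/sites/finance" -Path "C:\Backups\finance.cmp"

Contrast this with moving an entire site collection to another content database via Move-SPSite, which is fully supported – and which is why the site collection, not the subsite, should be your unit of growth.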

Ok I get it, but what do you recommend… what is the fix?
Essentially, what needs to happen is that the architecture of the SharePoint environment is considered carefully up front, as much as possible, in conjunction with:

  • the documented software boundaries and limits on TechNet
  • realistic estimates of content volume and growth
  • an information architecture that partitions content into site collections, so that whole site collections (rather than subsites) can be moved between content databases as they grow

In case you are thinking that surely Microsoft should just raise their support limits even higher, or that subsites should be movable in a more full-fidelity manner: I understand this point and was guilty of thinking it myself when I first started working with SharePoint. As time went on though, I asked myself: is there any other product that offers all of the functionality SharePoint does and has comparable supportability limits? Frankly, I couldn’t think of any. Besides, given that there is full transparency around the supportability limits and a wealth of information on TechNet making it clear (at least to IT pros) what to do and what not to do, I’m happy with this, at least for now.

Architectural Mistakes to Avoid #1 – Interstate Stretched Farm

In discussions with IT Pros at client sites, I have a few times seen them start off designing their farm to handle performance requirements for interstate users (e.g. Brisbane, Sydney, Melbourne) by having the core of the farm in Sydney, with one web front end in Brisbane and another in Melbourne. Essentially, an architecture that looks like this:

SP Architecture - Unsupported

What’s the challenge here?

The challenge is that technically it won’t be supported by Microsoft, because what has essentially been created here is a stretched farm with a packet latency of > 1ms between the WFEs (W), App Servers (A) and SQL Servers (S). So why isn’t an environment like this supported? Because it will cause performance problems, as all the internal farm servers need to communicate with one another quickly. To get an idea of how significantly performance will be degraded, the typical statistic quoted is a 50% drop per 1ms of delay. Ouch!

Also, occasionally I have heard the statement that, yes, it is possible to ping Sydney to Melbourne in < 1ms. Well, with the help of Physics 101 we can prove that this cannot be the case. Enter Wolfram Alpha to save us some time – let’s check how long it would take for a beam of light to travel from Sydney to Melbourne (just in one direction, not bouncing back again):

WolframAlpha

2.38ms. How about light being sent through fibre? 3.34ms. What does this mean? In the absolute optimal case, it would take at least 3.34ms for data to be sent from Sydney to Melbourne – and in practice longer, because there is of course routing overhead and network congestion. And this is why an interstate stretched farm such as this cannot be supported by Microsoft.
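
If you want to sanity-check these numbers yourself, the arithmetic is trivial. A quick sketch (the ~713km straight-line distance and a signal speed of ~70% of c in fibre are rough assumptions):

  # One-way latency floor between Sydney and Melbourne
  $distanceKm = 713           # approximate straight-line distance
  $cKmPerSec = 299792.458     # speed of light in a vacuum
  "{0:N2} ms in a vacuum" -f (1000 * $distanceKm / $cKmPerSec)
  "{0:N2} ms through fibre" -f (1000 * $distanceKm / ($cKmPerSec * 0.7))

That prints roughly 2.38ms and 3.40ms, in line with the Wolfram Alpha figures above.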

So how do we fix the supportability issue?

To get the farm back into a usable (and supported) state, we basically need to drop the idea of the web front ends in Brisbane and Melbourne. All requests for users in Brisbane and Melbourne are then routed through Sydney.

SP Architecture - Supported

The other solution here, if you really must stretch the farm across data centres (usually for cheap(er) and simple(r) Disaster Recovery), is to ensure that the data centres are in the same city – e.g. Sydney CBD to Mascot. Note that this doesn’t address the original concern though – improving performance for interstate users.

How do we improve the performance for interstate users in a publishing (e.g. intranet / public website) scenario?

If you’re having performance issues where users in Brisbane and Melbourne are performing heavy reads of content and few writes – e.g. in an intranet scenario – then you’ll want to ensure that you are using the SharePoint publishing (output) cache aggressively. This will give you a dramatic performance boost, because SharePoint won’t constantly be fetching data out of SQL and rendering it; users will simply be served the cached HTML of each page.
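
The output cache is normally enabled per site collection under Site Settings > Site collection output cache, but if you need to script it, something along these lines works against the publishing object model (a minimal sketch; the URL is a placeholder, and cache profiles still need to be chosen to suit your site):

  Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
  [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Publishing")
  $site = Get-SPSite "http://intranet"
  $writer = New-Object Microsoft.SharePoint.Publishing.SiteCacheSettingsWriter($site)
  # Turn on the publishing output cache for the site collection
  $writer.EnableCache = $true
  $writer.Update()
  $site.Dispose()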

How do we improve the performance for interstate users in a collaboration scenario?

The most popular solution employed here is to use WAN optimization (WanOp) devices, such as those made by Riverbed and Silver Peak.  These devices can not only cache data/content at each branch (i.e. Brisbane and Melbourne), but also perform compression and de-duplication to minimize the number of bytes actually sent to the client.  Note that these capabilities beyond simple caching are required because, in a collaboration scenario, the content is typically changing regularly.

Of course, from Windows 7 onwards, client machines also have Microsoft BranchCache built in, which provides similar capabilities to the WanOp devices, though it does have limitations (e.g. it only works with Windows devices).  Here are some further details on BranchCache:

  • http://technet.microsoft.com/en-au/network/dd425028.aspx
  • http://www.enterprisenetworkingplanet.com/windows/article.php/3896131/Simplify-Windows-WAN-Optimization-With-BranchCache.htm
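
For what it’s worth, switching BranchCache on at the branch clients is simple. A sketch using the built-in netsh commands (run from an elevated prompt on each branch machine; note the web front ends also need the BranchCache feature enabled on the server side):

  # Distributed mode: clients in the branch cache content for each other
  netsh branchcache set service mode=DISTRIBUTED
  # Verify the mode and cache status
  netsh branchcache show status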

Of course, the overall number of servers and their specifications need to be determined during the SharePoint infrastructure design process (e.g. in the above diagram, for a reasonably sized office it would be wise to add at least one more WFE for performance and high availability). Hopefully, though, I’ve at least shown you one critical design mistake to avoid.

Runas – With an Account from Another Domain

Ever wanted to be able to access a client’s backend data source using Windows Authentication, but your machine wasn’t a member of their domain?

For example – you might be working with a client that can’t provide you with a client machine that has Excel or InfoPath, but you need to access data in SQL.

The usual problem I’ve run into is this: if you’re using your local Excel or InfoPath client and attempt to add a new data connection, you get prompted with a screen such as the one below.  Selecting Windows Authentication will use your local machine credentials and won’t work.  The Username and Password boxes, as we all know, are for SQL authentication.

 Logon

So how can we specify a username and password for an account on another domain? Well, one would expect that if you’re VPN’d into the client network, a regular runas should work.  It doesn’t though, because the machine isn’t part of the domain:

 Runas Error

Well, recently I found a switch that does allow you to run an application as a user from another domain, and since it seems to be fairly well hidden, I thought I’d share it.

If you add the /netonly switch to the runas command, the application will run, and the credentials will be passed through when needed:

runas /netonly /user:clientdomain\accountname "C:\Program Files (x86)\Microsoft Office\Office14\Excel.exe"

Naturally, you need to be able to access the data source in the first place (e.g. be VPN’d into the network and have a valid account) for this to work.  Pretty cool though.

FAST Search for SharePoint 2010 – Indexing Database Content – Guidance

If you are doing any work with FAST Search for SharePoint 2010 and need to index database content (e.g. SQL tables), as a general rule of thumb you should use BCS for this.

FS4SP does have a JDBC connector that is quite capable of indexing database content, though don’t just use this because you have FAST and think you need to.

The reason is simple: you will have a much easier migration to SharePoint 2013, as the JDBC connector is no longer included there.
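
As a rough illustration of the crawl side, assuming your external content type / BDC model is already deployed, it is just a Business-type content source on the FAST Content SSA (the SSA, model and instance names below are placeholders):

  Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
  $ssa = Get-SPEnterpriseSearchServiceApplication -Identity "FAST Content SSA"
  # The LOB system and instance names come from your BDC model
  New-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Type Business -Name "SQL via BCS" -LOBSystemSet "CustomerModel","CustomerModelInstance"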

Unsupported Installation Scenarios on SP2013

Understanding the scenarios in which SharePoint is not supported is extremely important when designing SharePoint farms, because if you experience any trouble with your environment and need to get Microsoft support involved, they typically won’t be able to help you and will instead ask you to get your environment into a supported state.

On SP2013 it is important to note that these scenarios are not supported:

  • Installation on a machine that is not joined to a domain (i.e. a machine in a workgroup)
  • Installation on a Virtual Machine (VM) that uses Dynamic Memory (a quick way to check is sketched below)
  • Installation on a Domain Controller (only supported in development environments – not production)
  • Installation on Resilient File System (ReFS). ReFS is a new file system introduced with Windows 8 that is designed to be more resilient to common errors that would cause corruption or availability issues. Only NTFS is supported for SP2013 at the moment.
  • Installation on a Windows Web Server

Here’s the link to the original support article.
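
On the Dynamic Memory point, here is a quick way to check and fix a VM from a Windows Server 2012 Hyper-V host (a sketch; the VM name and memory size are placeholders, and the VM needs to be shut down to change its memory settings):

  # Check whether the VM uses Dynamic Memory (unsupported for SP2013)
  Get-VMMemory -VMName "SP2013-APP01" | Select-Object DynamicMemoryEnabled
  # Switch it to static memory if so (16GB is just an example size)
  Set-VMMemory -VMName "SP2013-APP01" -DynamicMemoryEnabled $false -StartupBytes 16GB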

Can’t Delete List Field?

Recently I had to help someone who wasn’t able to delete a field from a list!

Seemed a bit strange, I thought… just go and delete it 🙂

Well in this case there was a catch – it was sealed. One of their developers had deployed a custom feature containing list fields and had set this particular one to sealed.

This was reasonably easy to figure out by opening up SharePoint Manager 2010 (SPM2010).

By using SPM2010 you can easily navigate the site structure – down to the individual list and then see each of its fields.

If you then click on a field you can view all of its properties – and in SPM2010 you will see a ‘CanBeDeleted’ property, which in this case will be False.

If you just scroll down to the ‘Sealed’ property, set it to False and save your changes, you will then be able to delete the field.
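
If you’d rather script the fix than click through SPM2010, the same change is a few lines of PowerShell (a sketch; the URL, list name and field name are placeholders):

  Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
  $web = Get-SPWeb "http://intranet"
  $field = $web.Lists["Shared Documents"].Fields["SealedFieldName"]
  # Unseal the field so it can be changed, then delete it
  $field.Sealed = $false
  $field.Update()
  $field.Delete()
  $web.Dispose()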

Fixing A Broken Page – Tip #1

If you add a web part to a page and it wasn’t written correctly, you may experience an issue where the whole page doesn’t load.

When this happens, the simplest way to solve the issue is to perform the following steps:

  1. Add the string ?contents=1 to the URL of the broken page – e.g. http://intranet/pages/default.aspx?contents=1
  2. This will take you to the Web Part Maintenance page. Once the page has loaded, look for the web part that you think is causing the error. In most cases, it will actually display Error in the web part name.
  3. Check the box next to the web part and click the Delete button.
  4. Navigate back to the page.
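
If the page is so broken that even the maintenance page won’t cooperate, the same clean-up can be done from PowerShell via the limited web part manager (a sketch; the URL, page path and web part title are placeholders):

  Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
  $web = Get-SPWeb "http://intranet"
  $wpm = $web.GetLimitedWebPartManager("Pages/default.aspx", [System.Web.UI.WebControls.WebParts.PersonalizationScope]::Shared)
  # List the web parts on the page to find the broken one
  $wpm.WebParts | Select-Object Title, ID
  # Delete the offending web part by title
  $broken = $wpm.WebParts | Where-Object { $_.Title -eq "Broken Part" }
  $wpm.DeleteWebPart($broken)
  $web.Dispose()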

Fixing A Broken SharePoint Site

If you’re ever working with a site that just won’t load and you don’t know its structure, you can (if it isn’t severely busted) still attempt to access other pages by manually navigating to the “View All Site Content” page – just add _layouts/viewlsts.aspx to the end of the site URL.

E.g. http://intranet/_layouts/viewlsts.aspx

If you still have troubles with the site, you can then attempt to navigate to the site settings page and modify the settings that might be causing the issue by adding _layouts/settings.aspx to the end of the site URL:

E.g. http://intranet/_layouts/settings.aspx

How To Remove InfoPath Branding

Have you ever wanted to remove the annoying “Powered by InfoPath” logo that users see when accessing InfoPath forms in the browser?

The good news is that this is pretty straightforward.  Just follow these simple steps:

  1. Log on to one of the servers in the SharePoint farm (e.g. an App Server)
  2. Run the following command:  stsadm -o setformsserviceproperty -pn AllowBranding -pv false

Enjoy!

How To Delete the SSP on SP2007

On SharePoint 2007 every now and then you may have a need to delete the SSP so that you can then recreate it.

The trouble is that deleting it through the UI isn’t very reliable.  It is, however, reasonably straightforward using stsadm, with the following command line syntax:  stsadm -o deletessp -title "[SSP Title]" -force -deletedatabases

For example: stsadm -o deletessp -title "SSP01" -force -deletedatabases