While researching the steps required to apply a service pack to SQL Server on an active/active cluster I came across a lot of conflicting information. Here are the steps I ultimately followed for an error-free, issue-free update.
1. Make a note of all node names (and/or IP addresses) and SQL Server virtual names, along with their preferred nodes. If there are more than three nodes you may also need to take note of the possible owners for each SQL resource group. For my example, assume a cluster with node1 and node2, where SQL1 normally lives on node1 and SQL2 normally lives on node2.
2. To start with a clean slate and ensure any previous updates have completed, both nodes should be restarted if possible. Choose the physical node that you want to patch second and restart that node (in my example, node2).
3. Restart the node you want to patch first (node1). Both active SQL instances will now be running on node2. Some restarts are essential, but if you need to keep downtime to a minimum you can skip the first two restarts and simply fail SQL1 over to node2. The key point is to always patch a passive node.
4. In Cluster Administrator, remove node1 from the possible owners lists of SQL1 and SQL2. This ensures that neither SQL instance can fail over to node1 while it is being patched.
5. Run the service pack executable on node1.
6. Restart node1.
7. Add node1 back into the possible owners lists of SQL1 and SQL2 and fail both instances over to node1.
8. Repeat steps 4–6 on node2.
9. Add node2 back into the possible owners lists of SQL1 and SQL2 and fail both instances over to node2. Check that the build level is correct and review the SQL Server error logs.
10. Fail SQL1 over to node1. Check build levels and SQL Server error logs.
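If you prefer to script the failovers and possible-owner changes rather than clicking through Cluster Administrator, the cluster.exe command line can do the same job. A rough sketch of the key steps is below; the resource group names ("SQL1", "SQL2") and resource names ("SQL Server (SQL1)" etc.) are placeholders from my example, so substitute your own:

```shell
:: Step 3 alternative: fail SQL1 over so both instances run on node2
cluster group "SQL1" /move:node2

:: Step 4: remove node1 from the possible owners of each SQL Server resource
cluster resource "SQL Server (SQL1)" /removeowner:node1
cluster resource "SQL Server (SQL2)" /removeowner:node1

:: ...run the service pack on node1 and restart it (steps 5-6), then:

:: Step 7: add node1 back as a possible owner and fail both instances over
cluster resource "SQL Server (SQL1)" /addowner:node1
cluster resource "SQL Server (SQL2)" /addowner:node1
cluster group "SQL1" /move:node1
cluster group "SQL2" /move:node1

:: Steps 9-10: check the build level on each instance after failing back
sqlcmd -S SQL1 -Q "SELECT SERVERPROPERTY('ProductLevel'), SERVERPROPERTY('ProductVersion')"
sqlcmd -S SQL2 -Q "SELECT SERVERPROPERTY('ProductLevel'), SERVERPROPERTY('ProductVersion')"
```

These commands must be run in an environment with rights to administer the cluster; they are a sketch of the manual steps above, not a tested automation script.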
It seems like a lot of bouncing SQL instances back and forth, but as Brent Ozar has pointed out, the very reason you spend the money and time resourcing and building a failover cluster is to gain the ability to fail SQL instances transparently between physical nodes.