SQL Server: Merge replication fails due to timeout errors

Solution for: Merge replication fails due to timeout errors

QUESTION:

You administer several Microsoft SQL Server 2012 database servers. Merge replication has been configured for an application that is distributed across offices throughout a wide area network (WAN).

Many of the tables involved in replication use the XML and varchar(max) data types. Occasionally, merge replication fails due to timeout errors.

You need to reduce the occurrence of these timeout errors. What should you do?

Answer:

When you synchronize data rows with a large amount of data, such as rows with LOB columns, Web synchronization can require additional memory allocation and hurt performance. This occurs when the Merge Agent generates an XML message that contains too many data rows with large amounts of data. If the Merge Agent is consuming too many resources during Web synchronization, reduce the number of rows sent in a single message in one of the following ways:
  • Use the slow link agent profile for the Merge Agent.
  • Decrease the -DownloadGenerationsPerBatch and -UploadGenerationsPerBatch parameters for the Merge Agent to a value of 10 or less (see the sketch below). The default value of these parameters is 50.
Note: Merge Agent has a “slow link” profile designed for low bandwidth connections.
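If you prefer to script that change rather than use the dialogs, the batch sizes live on the agent profile. A minimal T-SQL sketch, run at the Distributor in the distribution database; it assumes a custom (non-system) profile, since the built-in profiles cannot be modified, and the profile_id and the dash-less parameter spelling are assumptions to verify against your MSagent_parameters rows:

-- List the available Merge Agent profiles (agent_type 4 = Merge Agent)
-- and note the profile_id of the custom profile your agents use.
EXEC sp_help_agent_profile @agent_type = 4;

-- Lower the batch parameters on that custom profile (101 is a placeholder;
-- use sp_add_agent_parameter instead if a parameter is not yet in the profile).
EXEC sp_change_agent_parameter
	@profile_id      = 101,
	@parameter_name  = N'DownloadGenerationsPerBatch',
	@parameter_value = N'10';
EXEC sp_change_agent_parameter
	@profile_id      = 101,
	@parameter_name  = N'UploadGenerationsPerBatch',
	@parameter_value = N'10';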

How to make the Merge Agent use the “slow link” profile: Change Existing Agents

Select a profile (on the General page of the Distributor Properties – <Distributor> dialog box, click Profile Defaults), and then click Change Existing Agents to specify that all existing jobs for an agent of a given type should use the selected profile. For example, if you have created a number of subscriptions to a merge publication and you want the Merge Agent job for each of these subscriptions to use the Slow link agent profile, select that profile and then click Change Existing Agents.
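The same change can be scripted. A hedged sketch of the T-SQL equivalent, run at the Distributor in the distribution database; the ids shown are placeholders you must look up first:

-- Find the agent_id of the Merge Agent job for the subscription.
SELECT id, name FROM dbo.MSmerge_agents;

-- Find the profile_id of the 'Slow link agent profile'.
EXEC sp_help_agent_profile @agent_type = 4;

-- Point that agent at the slow link profile (agent_type 4 = Merge Agent).
EXEC sp_update_agent_profile
	@agent_type = 4,
	@agent_id   = 1,	-- placeholder, from MSmerge_agents
	@profile_id = 5;	-- placeholder, from sp_help_agent_profile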

SQL Server ASYNC_NETWORK_IO Wait Type

Note: For SQL2000 this wait is called NETWORKIO.

The ASYNC_NETWORK_IO wait indicates that one of two scenarios is happening. The first scenario is that the session (i.e., SPID) is waiting for the client application to process the result set and send a signal back to SQL Server that it is ready to process more data. The second is that there may be a network performance issue.
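To check whether this wait is significant on your instance, you can look at the cumulative wait statistics; a minimal sketch (the numbers are cumulative since the last service restart or wait-stats clear):

SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'ASYNC_NETWORK_IO';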

Reducing SQL Server waits / wait times

If there are significant wait times on ASYNC_NETWORK_IO you have the following options:

  • Review the queries and identify large result sets. Verify that the client application is consuming data as efficiently as possible; for example, the application may be asking for a million rows of data but processing only one row at a time.
  • Review that all rows being requested are necessary. Often you can reduce this wait by filtering the result set to only the rows that are needed; using the TOP clause may be an option as well (see the sketch after this list). Client applications such as Microsoft Access may benefit from querying a view instead of pulling data from an entire table.
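A trivial illustration of that filtering (table and column names are hypothetical):

-- Return only the rows the client will actually display, instead of
-- streaming an entire table and discarding most of it client-side.
SELECT TOP (100) OrderID, OrderDate
FROM dbo.Orders
WHERE OrderDate >= '20120101'
ORDER BY OrderDate DESC;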

If the above tuning tips are reviewed and applied, but the server is still encountering high wait times, then ensure there aren’t any network-related issues:

  • Validate the network components between the application/clients and the SQL Server instance (router, for example).
  • Look at your NIC configuration on the server to make sure there are no issues with the physical card. Also, check if autodetect is picking the fastest speed.
  • Check network adapter bandwidth: 1 Gigabit is better than 100 megabits, and 100 megabits is better than 10 megabits.

Also worth mentioning is the common practice of performing data loads on the server. You may be seeing the ASYNC_NETWORK_IO wait during the times the data loads are occurring. If this is the case, make sure the shared memory protocol is enabled for the SQL Server instance and that the session is connected using net_transport = ‘Shared memory’. You can determine the net_transport for the connection by looking at the DMV sys.dm_exec_connections.
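A quick sketch of that check for the current session:

-- For a local data-load session you want to see 'Shared memory' here.
SELECT session_id, net_transport, auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;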

CHECKPOINT

Checkpoint is an internal process that writes all dirty pages (modified pages) from the buffer cache to physical disk; apart from this, it also writes the log records from the log buffer to the physical log file. Writing dirty pages from the buffer cache to the data file is also known as hardening of dirty pages. It is a dedicated process, run automatically by SQL Server at specific intervals, and SQL Server runs the checkpoint process for each database individually.
Operations that occur when a checkpoint is issued:
1) All dirty data file pages for the database are written to disk (all pages that have changed in memory since they were read from disk or since the last checkpoint), regardless of the state of the transaction that made the change.
2) Before a page is written to disk, all log records up to and including the most recent log record describing a change to that page are written to disk. This guarantees recovery can work and is called write-ahead logging. Log records are written to the log sequentially, and log records from multiple transactions will be interspersed in the log. The log cannot be selectively written to disk, so writing a dirty page to disk that only has a single log record affecting it may mean writing many more previous log records to disk as well.
3) Log records describing the checkpoint are generated.
4) The LSN of the checkpoint is recorded in the database boot page, in the dbi_checkptLSN field (see the sketch after this list).
5) If in the SIMPLE recovery model, the VLFs in the log are checked to see whether they can be marked inactive (called clearing or truncating the log – both of which are terrible misnomers, as nothing is either physically cleared or truncated).
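If you want to see that field for yourself, the well-known (but undocumented) DBCC PAGE command can dump the boot page, which is page 9 of file 1; the database name below is a placeholder:

DBCC TRACEON (3604);                  -- route DBCC output to the client
DBCC PAGE (N'YourDatabase', 1, 9, 3); -- look for dbi_checkptLSN in the output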

Four types of Checkpoints:
———————————
Automatic: This is the most common checkpoint; it runs as a background process to make sure the database can be recovered within the time limit defined by the Recovery Interval server configuration option.
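For reference, the recovery interval can be viewed or changed with sp_configure; a minimal sketch (the 5-minute value is only an example, and the option is advanced, so it must be exposed first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'recovery interval (min)', 5;  -- example value, in minutes
RECONFIGURE;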

Indirect: This is new in SQL Server 2012. It also runs in the background, but to meet a user-specified target recovery time for the specific database where the option has been configured. Once TARGET_RECOVERY_TIME has been set for a given database, it overrides the Recovery Interval specified for the server and avoids automatic checkpoints on that database.
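A minimal example of enabling an indirect checkpoint (the database name is a placeholder):

ALTER DATABASE YourDatabase
SET TARGET_RECOVERY_TIME = 60 SECONDS;  -- 0 reverts to automatic checkpoints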

Manual: This one runs just like any other T-SQL statement; once you issue the CHECKPOINT command it runs to completion. A manual checkpoint runs for your current database only. You can also specify an optional checkpoint_duration, the time in seconds within which you want the checkpoint to complete.
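For example (the database name is a placeholder):

USE YourDatabase;  -- CHECKPOINT acts on the current database only
CHECKPOINT 10;     -- optional checkpoint_duration: aim to finish in 10 seconds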

Internal: As a user you can’t control internal checkpoints. They are issued by specific operations such as:
1. Shutdown initiates a checkpoint operation on all databases, except when the shutdown is not clean (SHUTDOWN WITH NOWAIT).
2. The recovery model being changed from Full/Bulk-logged to Simple.
3. Taking a backup of the database.
4. If your database is in the Simple recovery model, the checkpoint process executes automatically either when the log becomes 70% full, or based on the server option Recovery Interval.
5. An ALTER DATABASE command that adds or removes a data/log file also initiates a checkpoint.
6. A checkpoint also takes place when the recovery model of the database is Bulk-Logged and a minimally logged operation is performed.
7. Database snapshot creation.

Delete headers in Word Documents

I used a Red Gate trial version to generate my SQL Server database documentation and needed to remove the header and footer the software generates automatically, and this saved my day. It isn’t that hard to be a DBA. 🙂
I used the following in the VBA editor and saved myself from a 2000-page document.



Sub RemoveHeadAndFoot()
    Dim oSec As Section
    Dim oHead As HeaderFooter
    Dim oFoot As HeaderFooter

    ' Walk every section of the active document and delete the contents
    ' of each header and footer that exists.
    For Each oSec In ActiveDocument.Sections
        For Each oHead In oSec.Headers
            If oHead.Exists Then oHead.Range.Delete
        Next oHead
        For Each oFoot In oSec.Footers
            If oFoot.Exists Then oFoot.Range.Delete
        Next oFoot
    Next oSec
End Sub

Knowing your indexes


The following query gives you:
  • a list of all the indexes,
  • the tables they reside in,
  • how many times they’ve been searched,
  • how many times they’ve been updated,
  • and the size of the indexes in memory.


SELECT 
	sch.name + '.' + t.name AS [Table Name], 
	i.name AS [Index Name], 
	i.type_desc,  
	ISNULL(user_updates,0) AS [Total Writes], 
	ISNULL(user_seeks + user_scans + user_lookups,0) AS [Total Reads],
	s.last_user_seek, 
	s.last_user_scan,
	s.last_user_lookup,
	ISNULL(user_updates,0) - ISNULL((user_seeks + user_scans + user_lookups),0) AS [Difference],  
	p.reserved_page_count * 8.0 / 1024 as SpaceInMB
FROM sys.indexes AS i WITH (NOLOCK)  
	LEFT OUTER JOIN sys.dm_db_index_usage_stats AS s	WITH (NOLOCK) ON s.object_id = i.object_id  AND i.index_id = s.index_id  AND s.database_id=db_id()  AND objectproperty(s.object_id,'IsUserTable') = 1  
	INNER JOIN		sys.tables					AS t	WITH (NOLOCK) ON i.object_id = t.object_id  
	INNER JOIN		sys.schemas					AS sch	WITH (NOLOCK) ON t.schema_id = sch.schema_id  
	LEFT OUTER JOIN sys.dm_db_partition_stats	AS p	WITH (NOLOCK) ON i.index_id = p.index_id and i.object_id = p.object_id
WHERE (1=1)
	--AND ISNULL(user_updates,0) >= ISNULL((user_seeks + user_scans + user_lookups),0) --shows all indexes including those that have not been used  
	--AND ISNULL(user_updates,0) - ISNULL((user_seeks + user_scans + user_lookups),0)>0 --only shows those indexes which have been used  
	--AND i.index_id > 1			-- Only non-first indexes (I.E. non-primary key)
	--AND i.is_primary_key<>1		-- Only those that are not defined as a Primary Key)
	--AND i.is_unique_constraint<>1 -- Only those that are not classed as "UniqueConstraints".  
ORDER BY [Table Name], [Index Name]

Source: Internet

I should’ve looked the other way

I am feeling exactly the same as Rob Farley must have felt 🙂

© 2011 Rob Farley

Verse 1:
My query sucks – it takes too long
So long I wrote this song
The plan’s not big – it ain’t a giant
And yet I have an angry client
Performance now has made her weary
So I’ve come in to fix her query
I promise I won’t ever fail her
Say “Trust me, love, I’m from Australia!”

Chorus:
I need to find you
But I don’t want to search every row
My predicate’s residual
My seek just runs too slow
I thought I’d caught a glimpse of you
Been searching for all day
But all along, I’d done it wrong
I should’ve looked the other way

Verse 2:
A trace is on, I know the reads
That fetch the bytes the query needs
There’s spooling from a CTE
They’ve got recursion needlessly
I need to dig a little further
I worry there might be a cursor
The DBA has the plan_handle
He says it’s not corrupt, he knows Paul Randal!

Repeat chorus

Bridge:
There is an index covering predicates with keys
But my developer has used inequalities
There is a range scan
Hiding truth
Hiding cost
Hiding you…

Repeat chorus x2

I should’ve looked the other way