Mirroring vs log shipping

Database mirroring is functionality in the SQL Server engine that reads from the transaction log and copies transactions from the principal server instance to the mirror server instance. Database mirroring can operate synchronously or asynchronously. If configured to operate synchronously, the transaction on the principal will not be committed until it is hardened to disk on the mirror. Database mirroring supports only one mirror for each principal database. Database mirroring also supports automatic failover if the principal database becomes unavailable. The mirror database is always offline in a recovering state, but you can create snapshots of the mirror database to provide read access for reporting and similar purposes.
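As a rough sketch (the database name, server addresses, port, and file paths below are placeholders, not from the source), establishing a synchronous mirroring session and a reporting snapshot looks like this:

```sql
-- Hypothetical sketch: SalesDB must already be restored on the mirror
-- WITH NORECOVERY before the partnership is created.

-- On the mirror server instance:
ALTER DATABASE SalesDB
    SET PARTNER = 'TCP://principal.contoso.com:5022';

-- On the principal server instance:
ALTER DATABASE SalesDB
    SET PARTNER = 'TCP://mirror.contoso.com:5022';
ALTER DATABASE SalesDB
    SET PARTNER SAFETY FULL;   -- synchronous (high-safety) mode

-- The mirror itself stays in a recovering state, but a database
-- snapshot of it can be queried for reporting:
CREATE DATABASE SalesDB_Snap
    ON (NAME = SalesDB_Data, FILENAME = 'C:\Snapshots\SalesDB.ss')
    AS SNAPSHOT OF SalesDB;
```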

Log shipping is based on SQL Server Agent jobs that periodically take log backups of the primary database, copy the backup files to one or more secondary server instances, and restore the backups into the secondary database(s). Log shipping supports an unlimited number of secondaries for each primary database.
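One cycle of the backup/copy/restore sequence can be sketched as follows (names and paths are placeholders; in practice each step runs as a scheduled SQL Server Agent job):

```sql
-- 1. On the primary: take a transaction log backup.
BACKUP LOG SalesDB
    TO DISK = '\\backupshare\SalesDB_20130101.trn';

-- 2. The Agent copy job moves the .trn file to the secondary server.

-- 3. On the secondary: restore the log backup, leaving the database
--    ready to accept the next restore. WITH STANDBY also allows
--    read-only access between restores; WITH NORECOVERY does not.
RESTORE LOG SalesDB
    FROM DISK = 'D:\logship\SalesDB_20130101.trn'
    WITH STANDBY = 'D:\logship\SalesDB_undo.dat';
```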

Database mirroring is preferable to log shipping in most cases, although log shipping does have the following advantages:

1. it provides backup files as part of the process
2. multiple secondaries are supported
3. it is possible to introduce a fixed delay when applying logs to allow the secondary to be used for recovering from user error

More information about both technologies is available in SQL Server 2005 Books Online in the topics “Understanding Log Shipping” and “Overview of Database Mirroring”.


Thus, there is a trade-off between the speed of retrieving data from a table and the speed of updating the table. For example, if a table is primarily read-only, having more indexes can be useful; but if a table is heavily updated, having fewer indexes could be preferable.
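For illustration (the table and column names here are made up), a covering nonclustered index speeds up a common read pattern, but must be maintained on every write to the key or included columns:

```sql
-- Helps read-heavy workloads that filter on CustomerID:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, TotalDue);   -- covers the query, avoids lookups

-- On a heavily updated table, every INSERT/UPDATE/DELETE that touches
-- these columns also has to maintain this index, so on write-heavy
-- tables fewer indexes may be the better trade.
```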


Across the many OEM installations I’ve set up, the one metric that has always amazed me is the “Tablespace Space Used (%)” metric.  This metric is often misunderstood, although it “should” be quite simple to understand.  What is so hard to understand about percentage (%) used?

In reviewing the documentation for OEM 11g and OEM 12c, the explanation for this metric has not changed much between releases.  The calculation that is performed to trigger this metric is really simple math:

Tablespace Space Used (%) = (TotalUsedSpace / MaximumSize) * 100
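The two inputs to this formula can be pulled straight from the DBA_TABLESPACE_USAGE_METRICS view (this is my own sketch for checking the math, not the agent’s exact logic; USED_SPACE and TABLESPACE_SIZE are reported in database blocks):

```sql
SELECT tablespace_name,
       used_space,        -- TotalUsedSpace, in blocks
       tablespace_size,   -- MaximumSize (includes autoextend room)
       ROUND(used_space / tablespace_size * 100, 2) AS used_pct
FROM   dba_tablespace_usage_metrics
ORDER  BY used_pct DESC;
```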

Once this metric has been triggered, most DBAs start scrambling to perform one of the following tasks:

  • Increase the size of the tablespace
  • Reorganize the entire tablespace (fragmentation issues)
  • Relocate segments to another tablespace
  • Run Segment Advisor on the tablespace

What I have come to find out is that sometimes OEM will trigger this metric even though the data files may not need any adjustment.  To get a clearer understanding of what caused this metric to trigger, we need to look at the “fulTbsp.pl” script.  This script is located in the $AGENT_HOME/sysman/admin/scripts directory.

In reviewing the “fulTbsp.pl” script, Oracle is not only looking at the current size of the data files and the maxsize of the data files; it is looking at the file system space as well.  The reason for this is to ensure that the data files have enough space to expand if needed.

Now, here is where it can become misleading.  By setting the Tablespace Space Used (%) metric for critical to 95, we assume that the metric will trigger when the tablespace reaches 95% used, correct?  Before rushing to perform the tasks above, let’s check and see what space is actually used in the tablespace.  To do this, Oracle provides us with a DBA view (DBA_TABLESPACE_USAGE_METRICS) for reviewing the percentage of tablespace used.  Below I have provided a sample query for getting the usage of a tablespace:

SELECT tablespace_name,
       round(used_percent, 2) AS used_pct
FROM   dba_tablespace_usage_metrics
WHERE  round(used_percent, 2) > 90;

Often, I have found that when an alert is triggered for the Tablespace Space Used (%) metric, the data files are less than 90% full.  The alert fires because OEM has determined that there is not enough space on the file system to expand the data files if needed.  If you keep this in mind, you’ll be able to keep a firm grasp on what is going on with OEM and your tablespaces.


source: dbasolved

Thus, therefore and hence are different

A simple way of distinguishing and using these words accurately:

1. ‘Thus’ means ‘in this/that way’ – it relates to ‘HOW’ – the manner in which – this or that happens or comes about. It has a practical flavour. e.g. Traditionally, you arrange things thus = Traditionally, this is how you arrange things

2. ‘Therefore’ means ‘for this reason’, or ‘because of this or that’ – it relates to deductive reasoning; it tells WHY this or that is so, or happened. e.g. He was late and therefore missed the bus = he was late and for this reason missed the bus

3. ‘Hence’ means ‘from this/that’ – it relates to WHERE – position, or point in time; it tells from where or what, or to where or what, something comes, derives, or goes. e.g. (i) Get thee hence! = Get yourself away from here! (ii) Henceforth all entrances will be guarded = From now on all entrances will be guarded (iii) She got the job – hence her good spirits = She got the job and her good spirits derive from that fact. (Note the different slant to ‘therefore’, which would also fit, but would say “her good spirits are due to (‘because of’; ‘for that reason’) that”.)

Eg :

Thus: This thing is a balloon, and thus is made of rubber and inflates when you blow into it.

Therefore: This thing inflates when you blow into it and is made of rubber; therefore, it is a balloon.

Hence: This thing is called a balloon, hence it must inflate and be made of rubber.


Data Type Precedence (Transact-SQL)

SQL Server uses the following precedence order for data types:

  1. user-defined data types (highest)
  2. sql_variant
  3. xml
  4. datetimeoffset
  5. datetime2
  6. datetime
  7. smalldatetime
  8. date
  9. time
  10. float
  11. real
  12. decimal
  13. money
  14. smallmoney
  15. bigint
  16. int
  17. smallint
  18. tinyint
  19. bit
  20. ntext
  21. text
  22. image
  23. timestamp
  24. uniqueidentifier
  25. nvarchar (including nvarchar(max) )
  26. nchar
  27. varchar (including varchar(max) )
  28. char
  29. varbinary (including varbinary(max) )
  30. binary (lowest)
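When an operator combines operands of different types, the operand whose type is lower in this list is implicitly converted to the higher-precedence type. A small illustration:

```sql
-- int (16) outranks varchar (27), so the string is converted to int:
SELECT 1 + '2' AS result;   -- yields 3, not '12'

-- datetime (6) outranks varchar (27), so the string literal is
-- converted to datetime before the comparison, not the other way
-- around:
SELECT CASE WHEN '2012-01-01' < GETDATE()
            THEN 'in the past'
            ELSE 'in the future'
       END AS verdict;
```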


Datatype Precedence

Oracle uses datatype precedence to determine implicit datatype conversion, which is discussed in the section that follows. Oracle datatypes take the following precedence:

  • Datetime and interval datatypes
  • Character datatypes
  • All other built-in datatypes
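So when a datetime value is compared with character data, the character operand is the one converted. A sketch (the employees table is illustrative; the implicit form depends on the session’s NLS_DATE_FORMAT, so the explicit form is safer):

```sql
-- Implicit: the literal is converted to DATE because datetime
-- datatypes take precedence over character datatypes.
SELECT *
FROM   employees
WHERE  hire_date > '01-JAN-2020';

-- Explicit, format-independent equivalent:
SELECT *
FROM   employees
WHERE  hire_date > TO_DATE('01-JAN-2020', 'DD-MON-YYYY');
```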

SQL Server: Merge replication fails due to timeout errors

Solution for: Merge replication fails due to timeout errors


You administer several Microsoft SQL Server 2012 database servers. Merge replication has been configured for an application that is distributed across offices throughout a wide area network (WAN).

Many of the tables involved in replication use the XML and varchar(max) data types. Occasionally, merge replication fails due to timeout errors.

You need to reduce the occurrence of these timeout errors. What should you do?


When you synchronize data rows with a large amount of data, such as rows with LOB columns, Web synchronization can require additional memory allocation and hurt performance. This occurs when the Merge Agent generates an XML message that contains too many data rows with large amounts of data. If the Merge Agent is consuming too many resources during Web synchronization, reduce the number of rows sent in a single message in one of the following ways:
  • Use the slow link agent profile for the Merge Agent.
  • Decrease the -DownloadGenerationsPerBatch and -UploadGenerationsPerBatch parameters for the Merge Agent to a value of 10 or less. The default value of these parameters is 50.
Note: Merge Agent has a “slow link” profile designed for low bandwidth connections.
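For the second option, the batch parameters can be added to the Merge Agent command line or job step (a sketch; server, database, and publication names are placeholders):

```
replmerg.exe -Publisher PUBSRV -PublisherDB SalesDB
    -Publication SalesPub
    -Subscriber SUBSRV -SubscriberDB SalesDB
    -DownloadGenerationsPerBatch 10
    -UploadGenerationsPerBatch 10
```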

How to make the Merge Agent use the “slow link” profile: Change Existing Agents:

Select a profile (On the General page of the Distributor Properties – <Distributor> dialog box, click Profile Defaults), and then click Change Existing Agents to specify that all existing jobs for an agent of a given type should use the selected profile. For example, if you have created a number of subscriptions to a merge publication, and you want to change the profile to specify that the Merge Agent job for each of these subscriptions should use the Slow link agent profile, select that profile, and then click Change Existing Agents.