Using $PARTITION to find the last time an event occurred

In a previous post I showed how you can use the $PARTITION system function to force your query to only look at one or more partitions and today I have a semi-practical example of how to use this.

We have a very big table that holds all our financial transactions along with an integer indicating what type of transaction it is.  I wanted to find out when we last recorded a certain transaction type, as I think it might have been removed quite some time ago.

This could obviously be done as follows:

SELECT MAX(T.MyDate)
FROM dbo.MyTable T (NOLOCK)
WHERE T.TransTypeId = X

and depending on how your table is structured and indexed this may well be the best way to do it.
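For example (this isn't from the original post, just a sketch), if this lookup matters enough to justify an index, something keyed on the transaction type with the date as the second key column lets the MAX be answered from the index.  On a partition-aligned index SQL Server still has to visit every partition, but each visit is a cheap seek, so it's worth testing against the loop below.

-- Hypothetical index; the name and column choice are illustrative only
CREATE NONCLUSTERED INDEX IX_MyTable_TransTypeId_MyDate
    ON dbo.MyTable (TransTypeId, MyDate);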

Our table is partitioned by month on our date column, so I can use $PARTITION to search each partition, starting with the most recent, for the last date TransTypeId = X was used.  As soon as I find a value I can stop.  This can be done as follows:

DECLARE @Counter INT = 200 -- this should be the maximum partition ID on our table
DECLARE @MaxDate DATETIME2(0);

WHILE @Counter > 0
BEGIN

  RAISERROR('%d', 0,1,@Counter) WITH NOWAIT;

  SELECT @MaxDate = MAX(T.MyDate)
  FROM dbo.MyTable T
  WHERE T.TransTypeId = X
  AND $partition.PF_MyTable(MyDate) = @Counter

  IF @MaxDate IS NOT NULL
  BEGIN
    SELECT @MaxDate, @Counter
    BREAK;
  END

  SET @Counter -= 1;

END;
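Rather than hard-coding 200, the maximum partition number can be read from the metadata; a minimal sketch, assuming we only care about the base table (heap or clustered index):

-- Derive the highest partition number for dbo.MyTable from sys.partitions
DECLARE @Counter INT;

SELECT @Counter = MAX(p.partition_number)
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.MyTable')
AND p.index_id IN (0, 1); -- heap or clustered index only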

If TransTypeId X has occurred recently then this code won't need to loop through many partitions, but if it last occurred in partition 1 it is probably going to be much slower than the straight MAX query.  It all depends on how your table is set up, but I think it's always nice to have multiple ways to potentially speed up a query.

Parameters in Queries

We have some functionality within our application that generates search strings for an entity before they are compiled into a JSON document and sent up to Elasticsearch.  We have one row in a table per entity, and a StatusId column is used to track the progress of the search string generation.  A few different stored procedures change the values in the StatusId column, and recently we've been using a standard piece of code in each of these stored procedures that declares some variables with descriptive names to hold the different StatusIds.  These variables then get used within the stored procedure instead of hard-coded values.

So instead of something like

UPDATE dbo.Test
SET StatusId = 2
WHERE StatusId = 1;

we use

DECLARE @Queue INT = 1;
DECLARE @Pickup INT = 2;
DECLARE @Process INT = 3;

UPDATE dbo.Test
SET StatusId = @Pickup
WHERE StatusId = @Queue;

I’ve just been looking into whether SQL Server behaves differently when executing these two queries and have found that it does.  There are potential performance issues where the optimiser gets the estimated number of rows very wrong when using the query with the parameters.

To demonstrate this let's create a table as follows:

CREATE TABLE dbo.Test
(
    Id INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    StatusId INT NOT NULL,
    SomeText VARCHAR(100) NOT NULL
);
GO

Now we can populate this with 100,000 rows as follows:

INSERT INTO dbo.Test (StatusId, SomeText)
VALUES (1, 'Some test data');
GO 100000

So we now have 100,000 rows where the StatusId = 1.  Let’s update 100 rows to have a StatusId = 2.

UPDATE TOP (100) dbo.Test
SET StatusId = 2
WHERE StatusId = 1;

Let’s update the statistics on the table just to make sure we’re all up to date.

UPDATE STATISTICS dbo.Test;

Now we can run the two different types of query that yield the same results.

SELECT *
FROM dbo.Test
WHERE StatusId = 1;
GO
DECLARE @StatusId INT = 1;
SELECT *
FROM dbo.Test
WHERE StatusId = @StatusId;
GO

If we look at the execution plan we can see the first query without the parameter has managed to estimate the number of rows correctly.

ParametersInQueries01

The query with the parameters has estimated the number of rows as 50,000.

ParametersInQueries02

SQL Server knows there are two distinct values in the StatusId column, and because it cannot sniff the value in a local variable it bases the estimate on the average number of rows per value: 100,000 / 2 = 50,000.  If we update another 100 rows to have a third StatusId we will see the estimated number of rows change.

UPDATE TOP (100) dbo.Test
SET StatusId = 3
WHERE StatusId = 1;
GO
UPDATE STATISTICS dbo.Test;
GO
SELECT *
FROM dbo.Test
WHERE StatusId = 1;
GO
DECLARE @StatusId INT = 1;
SELECT *
FROM dbo.Test
WHERE StatusId = @StatusId;
GO

ParametersInQueries03

As this is a simple SELECT from one table SQL Server has parametrised the query which can be seen in the execution plan.  Notice the SELECT statement near the top has StatusId = @1.

ParametersInQueries04
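If you'd rather not dig through the graphical plan, another way to confirm the parametrisation is to look in the plan cache.  This is just a rough sketch (the LIKE filter is only suitable for a quiet test database); the auto-parametrised statement shows up with objtype = 'Prepared'.

SELECT cp.objtype, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%Test%'
AND st.text NOT LIKE '%dm_exec_cached_plans%'; -- exclude this query itself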

This means that SQL Server will use the same execution plan if the query text stays the same but the literal value for StatusId changes.  So if we change the queries to return the 100 rows that are at StatusId = 2, the estimated number of rows is wrong for both of them.

SELECT *
FROM dbo.Test
WHERE StatusId = 2;
GO
DECLARE @StatusId INT = 2;
SELECT *
FROM dbo.Test
WHERE StatusId = @StatusId;
GO

For the first query we have the following:

ParametersInQueries05

This is only because SQL Server parametrised the query and reused the plan compiled for StatusId = 1 when it ran with StatusId = 2, but there are lots of restrictions that stop SQL Server parametrising queries like this by default.  For example, we can add another table that contains the statuses and create a foreign key to our first table as follows.

CREATE TABLE dbo.Status
(
    Id INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    StatusDescription VARCHAR(100) NOT NULL
);
GO
INSERT INTO dbo.Status (StatusDescription)
VALUES ('Queue'), ('Pickup'), ('Process');
GO
ALTER TABLE dbo.Test ADD CONSTRAINT FK_Test_StatusId FOREIGN KEY (StatusId) REFERENCES dbo.Status (Id);
GO

Using a JOIN prevents SQL Server from parametrising the query, so we can run the following two queries, one without a parameter and one with.

SELECT *
FROM dbo.Test T
INNER JOIN dbo.Status S ON S.Id = T.StatusId
WHERE S.Id = 2;
GO
DECLARE @StatusId INT = 2;
SELECT *
FROM dbo.Test T
INNER JOIN dbo.Status S ON S.Id = T.StatusId
WHERE S.Id = @StatusId;
GO

We can see that both queries return the same 100 rows.  If we look at the execution plans we can see they are very different.  We can also see that the first query has not been parametrised and the plan contains the actual value of 2.

ParametersInQueries06

ParametersInQueries07

If we check the estimated and actual rows in the first execution plan we can see that the index seek and key lookup both show 100 for both values.  The estimated number of rows for the clustered index scan is still 33,333.33, which is the total number of rows divided by the number of distinct values in the column.
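The 33,333.33 comes from the density vector on the StatusId statistics.  If you want to see it for yourself, something along these lines works (the statistics name is auto-generated so yours will differ):

-- Find the auto-created statistics on StatusId, then inspect the density vector
SELECT s.name AS StatsName
FROM sys.stats AS s
INNER JOIN sys.stats_columns AS sc ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
INNER JOIN sys.columns AS c ON c.object_id = sc.object_id AND c.column_id = sc.column_id
WHERE s.object_id = OBJECT_ID('dbo.Test')
AND c.name = 'StatusId';

-- Substitute the name returned above (it will look something like _WA_Sys_...):
-- DBCC SHOW_STATISTICS ('dbo.Test', _WA_Sys_xxxxxxxx) WITH DENSITY_VECTOR;
-- The "All density" value (1 / 3 = 0.3333333) multiplied by the 100,000 rows gives the 33,333.33 estimate.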

Conclusion:

Each situation will be different, but in most cases it looks like it is not a good idea to use a descriptively named variable in your code in place of a hard-coded value, i.e. one where the value is always going to be the same.  SQL Server doesn't know that the value in the variable will be the same every time, so it generates an execution plan based on the average across all possible values.
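If you do want to keep the readable variable names, one workaround, which isn't covered above and is just a sketch here, is OPTION (RECOMPILE).  The statement is compiled with the variable's current value, so the optimiser can use the histogram rather than the average density.

DECLARE @StatusId INT = 2;

SELECT *
FROM dbo.Test
WHERE StatusId = @StatusId
OPTION (RECOMPILE); -- @StatusId is known at compile time, so the estimate should be close to 100 rows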

Create CLR objects in SSDT

I don’t know whether the same thing was possible in Data Dude / DB Pro but in SSDT for VS 2012 and VS 2013 it is very simple to create your CLR objects. I did originally think (maybe incorrectly) that you had to create a separate solution for your CLR .NET code that when built would produce a DLL that you included as a reference in your DB project. You then needed to create a CREATE ASSEMBLY script and then a script to create the SQL object, e.g. CREATE FUNCTION…

I’ve recently found out that in SSDT you just create your CLR .NET object within your DB project and when you publish the DB it generates everything for you.

In a previous post I talked about using a CLR aggregate function to concatenate row data into a single string. https://simonlearningsqlserver.wordpress.com/2013/09/10/how-to-concatenate-values-in-multiple-rows-to-a-single-string/

I created this aggregate in SSDT by creating a new SQL CLR C# object in my DB project as shown below.

SSDT CLR  C# Aggregates 1

As you can see in the list on the left there are SQL CLR and SQL CLR C# object types available. The SQL CLR objects are the objects that you see in SQL Server, i.e. the objects that you can create with T-SQL CREATE statements. The SQL CLR C# objects are the things that contain the .Net code.

You can tell SSDT to automatically create the SQL objects required to utilise your new CLR code by opening the properties of the database project that the code exists in.  On the SQLCLR tab there is a checkbox Generate DDL.

SSDT CLR  C# Aggregates 5

If this is checked then SSDT will automatically generate the SQL object in the dbo schema.  

You can see this in the Projects folder in SQL Server Object Explorer in VS as shown below…

SSDT CLR  C# Aggregates 2

… and also once you’ve published the database project you will see the objects in SSMS as shown below.

SSDT CLR  C# Aggregates 3

Above you can see both the assembly that has been created (with the same name as the database project, SimonBlog) and the SQL object (in this case an aggregate function, ConcatMagic).  Any other CLR objects that you add to the same database project will also get published in the same assembly.  You can confirm that the correct .Net method is being used by scripting the aggregate function.

SSDT CLR  C# Aggregates 4

If you want to add your objects to a schema other than dbo then it seems you have to uncheck the Generate DDL checkbox in the database project settings and add the required SQL objects yourself, e.g. the CREATE AGGREGATE script.  Then when you publish, SSDT won't create the object automatically in dbo but will still publish the assembly and run the CREATE AGGREGATE script.
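For the aggregate from the post below, that manual script might look something like the following.  This is only a sketch: the Reporting schema is made up and the parameter types are my guess at a sensible mapping for the SqlString signature; SimonBlog is the assembly and ConcatMagic the CLR struct.

CREATE AGGREGATE Reporting.ConcatMagic
(
    @value NVARCHAR(4000),
    @delimiter NVARCHAR(4000)
)
RETURNS NVARCHAR(MAX)
EXTERNAL NAME SimonBlog.ConcatMagic;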

How to concatenate values in multiple rows to a single string

We quite often have a requirement (mainly in our reporting) where we need to list, in a single string, a bunch of values for a particular entity that are stored in our DB as a set of rows.

For example if we use the following script…

CREATE TABLE dbo.ConcatTest
(
 Id INT PRIMARY KEY IDENTITY NOT NULL,
 FKId INT NOT NULL,
 Country VARCHAR(100)
);
INSERT INTO dbo.ConcatTest (FKId, Country)
VALUES
(1, 'UK'),
(1, 'France'),
(2, 'US'),
(3, 'Germany'),
(3, 'France'),
(3, 'UK');
SELECT *
FROM dbo.ConcatTest;

This gives us the following data.

Concatenation1

We need a way of returning one row per FKId value along with a concatenated list of the related countries.

The following article contains several ways of doing this, but the one we use almost all of the time is the FOR XML approach.

https://www.simple-talk.com/sql/t-sql-programming/concatenating-row-values-in-transact-sql/

i.e. we run the following.

DECLARE @Delimiter VARCHAR(10) = ' '; -- this is the delimiter we will use when we concatenate the values

SELECT DISTINCT
    FKId,
    (SELECT STUFF(
        (SELECT @Delimiter + Country
         FROM dbo.ConcatTest CT2
         WHERE CT2.FKId = CT1.FKId
         FOR XML PATH(''), ROOT('root'), TYPE
        ).value('/root[1]', 'VARCHAR(MAX)') -- using the .value method allows us to extract special characters such as &
    , 1, 1, '')) AS Countries
FROM dbo.ConcatTest CT1;

We originally weren't including the .value part, but we found the following blog from Rob Farley that helped us handle the characters that would otherwise get escaped as part of the conversion to XML.

http://sqlblog.com/blogs/rob_farley/archive/2010/04/15/handling-special-characters-with-for-xml-path.aspx

This gives us the following result.

Concatenation2

However, after reading the first article I thought I'd give a CLR aggregate a try, and it seems to perform a bit better and is a lot easier to use.  I found the following post and decided to use this version.

http://www.mssqltips.com/sqlservertip/2022/concat-aggregates-sql-server-clr-function/

Here is the final CLR aggregate I went with (pretty much pinched from Andy).

using System;
using System.Collections;
using System.Collections.Generic;
using System.Data.SqlTypes;
using System.IO;
using System.Text;
using Microsoft.SqlServer.Server;

[Serializable]
[SqlUserDefinedAggregate(Format.UserDefined, MaxByteSize = -1)]
public struct ConcatMagic : IBinarySerialize
{
    private StringBuilder _accumulator;
    private string _delimiter;

    public Boolean IsNull { get; private set; }

    public void Init()
    {
        _accumulator = new StringBuilder();
        _delimiter = string.Empty;
        this.IsNull = true;
    }

    public void Accumulate(SqlString value, SqlString delimiter)
    {
        // Skip NULL values entirely so we never read SqlString.Value when it is null
        if (value.IsNull)
        {
            return;
        }

        if (!delimiter.IsNull && delimiter.Value.Length > 0)
        {
            _delimiter = delimiter.Value;

            if (_accumulator.Length > 0)
            {
                _accumulator.Append(delimiter.Value);
            }
        }

        _accumulator.Append(value.Value);
        this.IsNull = false;
    }

    public void Merge(ConcatMagic group)
    {
        if (_accumulator.Length > 0 && group._accumulator.Length > 0)
        {
            _accumulator.Append(_delimiter);
        }

        _accumulator.Append(group._accumulator.ToString());
    }

    public SqlString Terminate()
    {
        return new SqlString(_accumulator.ToString());
    }

    void IBinarySerialize.Read(System.IO.BinaryReader r)
    {
        _delimiter = r.ReadString();
        _accumulator = new StringBuilder(r.ReadString());

        if (_accumulator.Length != 0) this.IsNull = false;
    }

    void IBinarySerialize.Write(System.IO.BinaryWriter w)
    {
        w.Write(_delimiter);
        w.Write(_accumulator.ToString());
    }
}
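If you're not deploying through SSDT (see the SSDT post above for that route), a rough manual deployment sketch looks like this.  The assembly name and DLL path are purely illustrative.

-- Enable CLR, register the compiled DLL, then create the aggregate against it
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
GO
CREATE ASSEMBLY ConcatMagicAssembly
FROM 'C:\Builds\ConcatMagic.dll'
WITH PERMISSION_SET = SAFE;
GO
CREATE AGGREGATE dbo.ConcatMagic (@value NVARCHAR(4000), @delimiter NVARCHAR(4000))
RETURNS NVARCHAR(MAX)
EXTERNAL NAME ConcatMagicAssembly.ConcatMagic;
GO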

We can now run the following T-SQL to produce the same result as the FOR XML T-SQL above.

DECLARE @Delimiter VARCHAR(10) = ' '; -- this is the delimiter we will use when we concatenate the values
SELECT FKId, dbo.ConcatMagic(Country, @Delimiter)
FROM dbo.ConcatTest
GROUP BY FKId

This gives us the following result.

Concatenation3

Now to test the performance I’ve turned on statistics io and time and included the execution plan.  I get the following statistics. 
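For anyone following along, that just means running both versions with the following switched on (plus "Include Actual Execution Plan" in SSMS):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;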

Concatenation4

The number of logical reads is much higher for the FOR XML T-SQL, and the CLR aggregate completes in less than half the elapsed time.  For larger data sets the gap in elapsed time in favour of the CLR aggregate was even bigger.

I get the following execution plans

Concatenation5

I’ve zoomed into the FOR XML plan so you can see how much more complicated it is for SQL Server to process.  All the work for the aggregate is done by the CLR so obviously it’s not really appropriate here to try to compare the execution plans.  The useful metrics are the reads and time.

Converting a datetime value to yyyy-mm-ddThh:mi:ss.mmmZ string format

We want to send some date field data up to our Elasticsearch instance in the format yyyy-mm-ddThh:mi:ss.mmmZ.  The source of the data is a DATETIME data type column in our SQL Server 2008 R2 database.

According to BOL, to get the format we need we run the following (in this example I'm just using the GETDATE() function rather than selecting an actual value from a table).

SELECT CONVERT(VARCHAR(50), GETDATE(), 127);

When I run this I don’t get the “Z” character at the end.

In BOL there is a comment that says: "The optional time zone indicator, Z, is used to make it easier to map XML datetime values that have time zone information to SQL Server datetime values that have no time zone. Z is the indicator for time zone UTC-0. Other time zones are indicated with HH:MM offset in the + or - direction. For example: 2006-12-12T23:45:12-08:00."

The reason the original query doesn't return the "Z" is that there is no offset data in the DATETIME data type.

We need to add the offset to the data by casting it as a DATETIMEOFFSET data type before converting to the string, i.e.

SELECT CONVERT(VARCHAR(50), CAST(GETDATE() AS DATETIMEOFFSET), 127);

NB. As we are using GETDATE() you will see the milliseconds included in the result, but if your original data is only precise to the nearest second then the millisecond value won't appear.  If you have got millisecond data and don't want it, you could use the STUFF function after converting to the string to remove it.
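Another option, rather than string surgery with STUFF, is to control the fractional precision on the DATETIMEOFFSET itself before converting.  This is just a sketch and worth testing against your own data and version; I haven't checked every precision.

-- Whole seconds only
SELECT CONVERT(VARCHAR(50), CAST(GETDATE() AS DATETIMEOFFSET(0)), 127);

-- Milliseconds (three fractional digits)
SELECT CONVERT(VARCHAR(50), CAST(GETDATE() AS DATETIMEOFFSET(3)), 127);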

Script to view the size of all tables in a database

I’ve got a 300 GB database on my local PC containing lots of tables and I wanted to see if there are any tables I can clear down to reduce the size, e.g. log tables. This script will return a list of all the tables in a specific database ordered by the size.

SELECT IDENTITY(INT, 1, 1) AS Id, schema_name(schema_id) as SchemaName, name as TableName
INTO #tables
FROM sys.tables
WHERE type = 'U';
DECLARE @Counter INT = 1;
DECLARE @MaxCounter INT = (SELECT MAX(Id) FROM #tables);
DECLARE @Table VARCHAR(100)

CREATE TABLE #TableSize
(
 name VARCHAR(100)
 ,rows int
 ,reserved VARCHAR(100)
 ,data VARCHAR(100)
 ,index_size VARCHAR(100)
 ,unused VARCHAR(100)
);
WHILE @Counter <= @MaxCounter
BEGIN
SELECT @Table = SchemaName + '.' + TableName
 FROM #tables
 WHERE Id = @Counter;
INSERT INTO #TableSize
 EXEC sp_spaceused @Table;
SET @Counter += 1;
END;
SELECT *
FROM #TableSize
ORDER BY CAST(SUBSTRING(index_size, 1, LEN(index_size) - 3) AS INT) DESC;
DROP TABLE #tables;
DROP TABLE #TableSize;

Updated on 24/09/2013

I’ve just found this

EXEC sp_msforeachtable 'EXEC sp_spaceused ''?''';

It’s undocumented but might be easier that doing the loop above.

Don’t disable your Service Broker queues (if you care about the order in which the queues process your messages)

I’m a SQL developer within a team developing a new framework that will be used to replace our multiple legacy systems.  We ETL certain data from certain legacy entities to our new system and use Service Broker in this process.  Our legacy system sends the Id of the entity in a Service Broker conversation and then doesn’t worry about what happens after that.  This asynchronous processing is why we chose to use Service Broker.

In our legacy system we run certain processes after an entity is saved on our administration website and the final one of these processes is to send the entity to Service Broker.  Our legacy system processes any related entities in a particular order and these all get sent to Service Broker in that same order.  As the ordering is handled by the legacy system we didn’t implement any priorities on our Service Broker queues – it just receives the messages in the order we sent them with the default priority of 5.

However, yesterday we had an issue where a new user was created on our administration website.  Our users all exist within a container and the permissions for that user are set on the container allowing us to give the same permissions to multiple users by adding them to the same containers.  This new user was part of a new container and in this situation the legacy system processes the new container first and sends it to Service Broker and then processes the user and then sends that.  Service Broker receives the container entity first and we ETL over the required data.  Service Broker then receives the user entity and we ETL that.  Because of the database model in the new system the container must exist before the user can be added to it so we always have to ETL over the entities in this order.

Shortly after the new user was created we received a call saying they are unable to access the new system.  I had a look and for some reason the user hadn’t been ETL’d across.  The new container was there but not the user.  I checked the logs and saw that even though we’d sent the container before the user, Service Broker had received the user message before the container message.

I checked the logs to see if this was happening often and found several occurrences on certain days where the receive order differed from the send order.  They all seemed to happen at roughly the same time as one of our live releases.

Before each go live we disable our Service Broker queues to temporarily stop the ETL process.  While the queues are disabled the messages queue up in the transmission queue (sys.transmission_queue), and when we re-enable the queues they are sent from the transmission queue to the Service Broker queues.  However, the transmission queue does not seem to care about the order in which it received the messages, so it sends them on in an arbitrary order.  This is how we ended up processing the user entity before the container entity.

On MSDN the documentation states that "Service Broker guarantees that a program receives each message in a conversation exactly once in the order in which the message was sent, not the order in which the message entered the queue".  (See the full article here: http://msdn.microsoft.com/en-us/library/ms166049(v=sql.105).aspx.)  This statement only seems to hold true if the queues are enabled.

I have since seen on the MSDN site that they say "...it is rarely necessary to stop a queue once the queue is made available", so I don't think our stopping of the queues is recommended.  (See the full article here: http://msdn.microsoft.com/en-us/library/ms166046(v=sql.105).aspx.)

We have a stored procedure associated with our target queue and this is the thing that runs the RECEIVE statement. In future we are just going to disable the activation on the queue so that the messages are sent to Service Broker but not received until after our go live is complete.  This will ensure the messages are picked up in the same order they are sent.
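As a rough sketch (dbo.MyTargetQueue is a stand-in for your own activated target queue, and this assumes the queue already has an activation procedure configured), pausing and resuming the processing looks like this:

-- Pause processing without stopping delivery; messages still arrive on the queue
ALTER QUEUE dbo.MyTargetQueue WITH ACTIVATION (STATUS = OFF);

-- ... perform the release ...

-- Re-enable activation so the activated procedure starts draining the queue again
ALTER QUEUE dbo.MyTargetQueue WITH ACTIVATION (STATUS = ON);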

I’ve written a bit of SQL to demonstrate the ordering with the queues enabled and disabled.

1) Create a new database and set up the service broker queues.  I’ve created two priorities, 3 and 7 so I can demo how Service Broker uses these.

USE master
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'ServiceBrokerTesting')
BEGIN
 DROP DATABASE ServiceBrokerTesting;
END;
IF NOT EXISTS (SELECT * FROM sys.databases WHERE name = 'ServiceBrokerTesting')
BEGIN
 CREATE DATABASE ServiceBrokerTesting;
 ALTER DATABASE ServiceBrokerTesting SET ENABLE_BROKER;
 ALTER DATABASE ServiceBrokerTesting SET TRUSTWORTHY ON;
 ALTER DATABASE ServiceBrokerTesting SET HONOR_BROKER_PRIORITY ON;
END;
GO
USE ServiceBrokerTesting
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '12345ABC$'
-- Create the service broker objects
IF NOT EXISTS (SELECT * FROM sys.service_message_types WHERE name = 'TestType')
BEGIN
 CREATE MESSAGE TYPE TestType VALIDATION = NONE;
END;
IF NOT EXISTS (SELECT * FROM sys.service_contracts WHERE name = 'TestContract')
BEGIN
 CREATE CONTRACT TestContract (TestType SENT BY ANY);
END;
IF NOT EXISTS (SELECT * FROM sys.service_queues WHERE name = 'TestInitQueue')
BEGIN
 CREATE QUEUE TestInitQueue WITH STATUS=ON, RETENTION=OFF;
END;
IF NOT EXISTS (SELECT * FROM sys.service_queues WHERE name = 'TestTargetQueue')
BEGIN
 CREATE QUEUE TestTargetQueue WITH STATUS=ON, RETENTION=OFF;
END;
IF NOT EXISTS (SELECT * FROM sys.services WHERE name = 'TestLowInit')
BEGIN
 CREATE SERVICE TestLowInit
 ON QUEUE dbo.TestInitQueue (TestContract);
END;
IF NOT EXISTS (SELECT * FROM sys.services WHERE name = 'TestLowTarget')
BEGIN
 CREATE SERVICE TestLowTarget
 ON QUEUE dbo.TestTargetQueue (TestContract);
END;
IF NOT EXISTS (SELECT * FROM sys.services WHERE name = 'TestHighInit')
BEGIN
 CREATE SERVICE TestHighInit
 ON QUEUE dbo.TestInitQueue (TestContract);
END;
IF NOT EXISTS (SELECT * FROM sys.services WHERE name = 'TestHighTarget')
BEGIN
 CREATE SERVICE TestHighTarget
 ON QUEUE dbo.TestTargetQueue (TestContract);
END;
IF NOT EXISTS (SELECT * FROM sys.conversation_priorities WHERE name = 'TestLowPriority')
BEGIN
 CREATE BROKER PRIORITY TestLowPriority
 FOR CONVERSATION
 SET (CONTRACT_NAME = TestContract,
 LOCAL_SERVICE_NAME = TestLowTarget,
 REMOTE_SERVICE_NAME = N'TestLowInit',
 PRIORITY_LEVEL = 3);
END;
IF NOT EXISTS (SELECT * FROM sys.conversation_priorities WHERE name = 'TestHighPriority')
BEGIN
 CREATE BROKER PRIORITY TestHighPriority
 FOR CONVERSATION
 SET (CONTRACT_NAME = TestContract,
 LOCAL_SERVICE_NAME = TestHighTarget,
 REMOTE_SERVICE_NAME = N'TestHighInit',
 PRIORITY_LEVEL = 7);
END;

2) Now we’re going to send 50 messages.  The messages will contain an integer 1 to 50 and we’ll send them in order.  All messages will be sent using priority 3 except for every 10th one, which will use priority 7.

-- Create conversations
-- Use message priority 3 except for every 10th one use priority 7
DECLARE @dialog_handle UNIQUEIDENTIFIER ;
DECLARE @Counter INT = 1;
WHILE @Counter <= 50
BEGIN
IF @Counter % 10 = 0
 BEGIN
BEGIN DIALOG CONVERSATION @dialog_handle
 FROM SERVICE [TestHighInit]
 TO SERVICE 'TestHighTarget'
 ON CONTRACT [TestContract];
SEND ON CONVERSATION @dialog_handle
 MESSAGE TYPE [TestType] (@Counter);
END
ELSE
 BEGIN
BEGIN DIALOG CONVERSATION @dialog_handle
 FROM SERVICE [TestLowInit]
 TO SERVICE 'TestLowTarget'
 ON CONTRACT [TestContract];
SEND ON CONVERSATION @dialog_handle
 MESSAGE TYPE [TestType] (@Counter);
END;
END CONVERSATION @dialog_handle WITH CLEANUP;
SET @Counter += 1;

END;

3) Now create a table to write the messages as we receive them.

CREATE TABLE dbo.Receiver
(
 Id INT IDENTITY(1,1) NOT NULL
 ,priority INT NOT NULL
 ,queuing_order INT NOT NULL
 ,service_name VARCHAR(20) NOT NULL
 ,message INT NOT NULL
);

4) Run the RECEIVE statement to read off the messages.

DECLARE @priority INT
DECLARE @queuing_order INT
DECLARE @service_name VARCHAR(20)
DECLARE @message INT
WHILE (1=1)
BEGIN
WAITFOR (
 RECEIVE TOP (1)
 @priority = priority,
 @queuing_order = queuing_order,
 @service_name = service_name,
 @message = CAST(message_body AS INT)
 FROM TestTargetQueue 
 ), TIMEOUT 6000;
IF @@ROWCOUNT = 0
 BEGIN
 BREAK;
 END;
INSERT INTO dbo.Receiver (priority, queuing_order, service_name, message)
 VALUES (@priority, @queuing_order, @service_name, @message);
END;

5) Check the contents of the table to prove that the messages were received firstly in priority order and then in the order they were sent.  In this case messages were received in the order 10, 20, 30, 40, 50, 1, 2, 3, etc, as the messages that were a multiple of 10 were sent with a higher priority.

SELECT *
FROM dbo.Receiver;

6) Truncate the table, disable the queues and resend the messages.

TRUNCATE TABLE dbo.Receiver
GO
-- Disable the queues
IF EXISTS (SELECT * FROM sys.service_queues WHERE name = 'TestInitQueue')
BEGIN
 ALTER QUEUE TestInitQueue WITH STATUS=OFF, RETENTION=OFF;
END;
IF EXISTS (SELECT * FROM sys.service_queues WHERE name = 'TestTargetQueue')
BEGIN
 ALTER QUEUE TestTargetQueue WITH STATUS=OFF, RETENTION=OFF;
END;
-- Resend the messages but this time don't close the conversations
DECLARE @dialog_handle UNIQUEIDENTIFIER ;
DECLARE @Counter INT = 1;
WHILE @Counter <= 50
BEGIN
IF @Counter % 10 = 0
 BEGIN
BEGIN DIALOG CONVERSATION @dialog_handle
 FROM SERVICE [TestHighInit]
 TO SERVICE 'TestHighTarget'
 ON CONTRACT [TestContract];
SEND ON CONVERSATION @dialog_handle
 MESSAGE TYPE [TestType] (@Counter);
END
 ELSE
 BEGIN
BEGIN DIALOG CONVERSATION @dialog_handle
 FROM SERVICE [TestLowInit]
 TO SERVICE 'TestLowTarget'
 ON CONTRACT [TestContract];
SEND ON CONVERSATION @dialog_handle
 MESSAGE TYPE [TestType] (@Counter);
END;
SET @Counter += 1;

END;

7) As you can see the messages are all just sitting in sys.transmission_queue.

SELECT *
FROM sys.transmission_queue;

8) Enable the queues.

IF EXISTS (SELECT * FROM sys.service_queues WHERE name = 'TestInitQueue')
BEGIN
 ALTER QUEUE TestInitQueue WITH STATUS=ON, RETENTION=OFF;
END;
IF EXISTS (SELECT * FROM sys.service_queues WHERE name = 'TestTargetQueue')
BEGIN
 ALTER QUEUE TestTargetQueue WITH STATUS=ON, RETENTION=OFF;
END;
GO

9) Run the RECEIVE statement again and log to the table.

DECLARE @priority INT
DECLARE @queuing_order INT
DECLARE @service_name VARCHAR(20)
DECLARE @message INT
WHILE (1=1)
BEGIN

 WAITFOR (
 RECEIVE TOP (1)
 @priority = priority,
 @queuing_order = queuing_order,
 @service_name = service_name,
 @message = CAST(message_body AS INT)
 FROM TestTargetQueue 
 ), TIMEOUT 6000;

 IF @@ROWCOUNT = 0
 BEGIN
 BREAK;
 END;
INSERT INTO dbo.Receiver (priority, queuing_order, service_name, message)
 VALUES (@priority, @queuing_order, @service_name, @message);
END;

10) Check the contents of the table.  As you can see Service Broker has honoured the priority but the messages within each priority are not ordered correctly.

SELECT *
FROM dbo.Receiver;

In summary if you need to ensure that the messages are received in the order they were sent then you should not disable your Service Broker queues.  Either disable the activation on the target queue or disable the thing that does the initial send.