Greetings!

Welcome to my technical blog.

I want to collect here the first-hand experiences, tips, and best-practices that I use frequently. I hope you find something here of interest to you :-).

There is no ‘standard’ to my posts. Some are blindingly obvious – where I just had to note something down that tripped me up. Other posts (hopefully) will be deeper.

Mostly I write in the first-person “I did this and it worked”, rather than “if you do this it may work if your environment is similar to mine”, in the hope that you can adapt my succinct notes (that ACTUALLY worked) to your situation.


What’s New!

This very handy little script lists stored-procedures, tables, etc with the most recent at the top.

Great when you have been away, or even as the foundation of a migration tracking SSRS report.

-- WhatsNew.sql

SELECT [type_desc],
       (SELECT [name] FROM sys.schemas WHERE schema_id = ob.schema_id) [schema],
       CASE parent_object_id
           WHEN '0' THEN [name]
           ELSE OBJECT_NAME (parent_object_id) + '.' + [name]
       END [object_name],
       create_date,
       modify_date -- equals create_date if the object has never been altered
FROM sys.objects ob
WHERE is_ms_shipped = 0 -- exclude system-objects
--AND [type] = 'P' -- just stored-procedures
-- ORDER BY [schema] DESC, modify_date DESC
ORDER BY modify_date DESC;

Audit Logins (light)

This is a partial update of my “DBA Audit” post, using code more suited to SQL 2014 and beyond.

Before a migration I created a job called “Audit Logins” scheduled to run every minute to help flag unused logins.

The first step ‘setup’ creates and populates a table with all enabled logins …

/* initial setup */

	/* create table */

	CREATE TABLE [master].[dbo].[LoginAudit] (
		LoginName VARCHAR(200), LastLoginDate DATETIME)

	/* populate with logins */

	INSERT INTO [master].[dbo].[LoginAudit] (LoginName, LastLoginDate)
		SELECT [name], NULL 
		FROM [master].[sys].[server_principals] 
		WHERE [type] <> 'R' /* is not a Role */
		AND is_disabled <> 1; /* is not Disabled */

By design, step-1 fails on every run after the first (as the table already exists), and the job continues onward to step-2 ‘update’ …

/* update logins */

	SELECT MAX(login_time) LoginTime, login_name LoginName
	INTO #LoginTempTable
	FROM [sys].[dm_exec_sessions]
	WHERE login_name <> '' /* exclude ef */
	GROUP BY login_name;

	UPDATE [master].[dbo].[LoginAudit]
	SET LastLoginDate = tmp.LoginTime 
	FROM #LoginTempTable tmp
	WHERE LoginAudit.LoginName = tmp.LoginName;

I called it “light” as it is designed to hold just one row per login. Therefore, if it is forgotten and runs for years, the audit table will never grow.
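
For example, once the job has been running for a while, a quick query like this (just a sketch against the table created above) lists the logins that have never been seen connecting …

SELECT LoginName, LastLoginDate
FROM [master].[dbo].[LoginAudit]
WHERE LastLoginDate IS NULL
ORDER BY LoginName;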

Copying all tables to a new database

As part of an e-commerce re-write project I was tasked with copying over 300 tables from one database to another, including all data, identity columns, indexes, constraints, primary and foreign keys.

I was unable to simply backup / restore due to space and security issues. Here is my solution (actioned on the target server for speed) …

1. Script Create statements for every table. On the ‘old’ database I expanded the database and clicked ‘Tables’ then clicked ‘View’ / ‘Object Explorer Details’ so all the tables were listed in the right-hand pane. Then I was able to highlight all the tables there, and right-click ‘Script Table as’ / ‘Create To’ / ‘New Query Editor Window’.

When finished I changed connection to the ‘new’ empty database and ran the script to create all the tables – without data.

2. Disable all foreign-key constraints. (from here https://stackoverflow.com/questions/11639868/temporarily-disable-all-foreign-key-constraints). I ran this script on the new database …

-- disable fks
use targetdb
go

DECLARE @sql NVARCHAR(MAX) = N'';

;WITH x AS 
(
  SELECT DISTINCT obj = 
      QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + '.' 
    + QUOTENAME(OBJECT_NAME(parent_object_id)) 
  FROM sys.foreign_keys
)
SELECT @sql += N'ALTER TABLE ' + obj + ' NOCHECK CONSTRAINT ALL;
' FROM x;

EXEC sp_executesql @sql;

3. Populate all the tables using the SSIS wizard. In SSMS I right-clicked the old database / ‘Tasks’, ‘Export Data…’. In the wizard I accepted the old database as the Source and typed in the new database details as the Target. I ticked all tables, and clicked ‘edit Mappings’ to tick ‘Enable identity insert’. I then deselected the Views, and executed the SSIS package.

4. To Re-enable all foreign keys – I ran this script (from the same web page as 2.) on the new database …

-- re-enable fks
use targetdb
go

DECLARE @sql NVARCHAR(MAX) = N'';

;WITH x AS 
(
  SELECT DISTINCT obj = 
      QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + '.' 
    + QUOTENAME(OBJECT_NAME(parent_object_id)) 
  FROM sys.foreign_keys
)
SELECT @sql += N'ALTER TABLE ' + obj + ' WITH CHECK CHECK CONSTRAINT ALL;
' FROM x;

EXEC sp_executesql @sql;

To check progress I used my old ‘database_compare’ script.
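
The ‘database_compare’ script itself is not shown here, but as a rough stand-in, a row-count per table (run against the old and new databases in turn, then compared) gives a quick progress check …

-- rough stand-in for a progress check: rows per table
SELECT t.name AS TableName,
       SUM(p.rows) AS TableRows
FROM sys.tables t
JOIN sys.partitions p
  ON p.object_id = t.object_id
 AND p.index_id IN (0, 1) -- heap or clustered index only
GROUP BY t.name
ORDER BY t.name;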

Deadlock from EF code

Entity Framework was squirting a raw SELECT statement at the database and causing deadlocks.

To fix, I captured the query text with sp_BlitzLock and executed it in Plan Explorer.

Plan Explorer confirmed that the data was being retrieved using a non-clustered index combined with the clustered-index.

The Plan Explorer ‘Index Analysis’ tab showed that the non-clustered index did not cover over 15 of the columns the query needed (colored pink).

I was able to create a new index that covered 100% of the columns within the Index Analysis screen.

I executed the query again to confirm it was no longer using the clustered index, and was therefore quicker and less likely to cause a deadlock.
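
As an illustration only (the table and column names below are hypothetical, not the actual EF query), the fix was a covering index of this general shape – the key column(s) the query filters on, plus the remaining columns it selects in the INCLUDE list – so the lookup into the clustered index disappears …

-- hypothetical covering index
CREATE NONCLUSTERED INDEX IX_OrderItems_CustomerId_Covering
ON dbo.OrderItems (CustomerId)
INCLUDE (OrderDate, ProductId, Quantity, UnitPrice);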

Improving sp_BlitzLock

I noticed the output from sp_BlitzLock surfaces object_ids in the query column.

It is a simple matter to decode object_ids like …

SELECT OBJECT_NAME(SomeNumber);

… where SomeNumber is the object id.

Reducing index count

Sometimes it’s hard to see the wood for the trees. With over 30 indexes on a table of 50 columns, I searched for a graphical way to list the columns against each index, so I could easily see a) indexes that were totally encapsulated within a larger one, and b) almost identical indexes where a column (or two) could be added to one so that the smaller could be dropped.

Initially it was sp_BlitzIndex that named the tables with too many indexes. I then queried each of those tables in SentryOne’s Plan Explorer with something like … select * from dbo.order_items; … or whatever.

Some time later :), in the Index Analysis tab I was able to tick boxes to show every column and hey presto! The exact graphical tool I wanted 🙂 with the bonus of an easy way to manipulate the indexes.

But watch out! You need another tool to rank the read/write ratio of each index before you start making changes (I use my old ‘indexmaint’ script).
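
The ‘indexmaint’ script is not reproduced here, but something along these lines (a rough sketch over sys.dm_db_index_usage_stats) shows reads versus writes per index on a given table …

-- reads vs writes per index (current database, since last restart)
SELECT i.name AS IndexName,
       ISNULL(us.user_seeks + us.user_scans + us.user_lookups, 0) AS Reads,
       ISNULL(us.user_updates, 0) AS Writes
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats us
       ON us.object_id = i.object_id
      AND us.index_id = i.index_id
      AND us.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('dbo.order_items')
ORDER BY Writes DESC;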

Optimizing Updates

To optimize a large UPDATE in a procedure I created an index to cover every column mentioned in it, as if it were a SELECT statement.
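
For example (purely hypothetical table and column names), for an update like the one below, the supporting index names every column the UPDATE touches in its SET and WHERE clauses …

-- the large UPDATE inside the procedure (sketch)
DECLARE @Cutoff date = '20240101';

UPDATE dbo.OrderLines
SET    DespatchStatus = 'Sent'
WHERE  DespatchStatus = 'Pending'
AND    OrderDate < @Cutoff;

-- an index covering every column mentioned, as if the UPDATE were a SELECT
CREATE NONCLUSTERED INDEX IX_OrderLines_Despatch
ON dbo.OrderLines (DespatchStatus, OrderDate);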

Removing all duplicate rows

Just recording here an update to the old ‘HAVING’ way of removing duplicate rows …

WITH cte AS (
    SELECT SomeColumnName,
           ROW_NUMBER() OVER (PARTITION BY SomeColumnName ORDER BY SomeColumnName) AS [rn]
    FROM [SomeDatabaseName].[dbo].[SomeTableName]
)
SELECT * FROM cte WHERE [rn] > 1 -- #1 test
-- DELETE cte WHERE [rn] > 1 -- #2 execute

Change Job notification Operators

I wanted to standardize job error notifications, so created a new operator called ‘DBA’ (with multiple email addresses).

This code semi-automates the process of updating all the jobs by listing them along with the code needed to change them …

-- ChangeNotifications.sql

Select
	J.name JobName,
	O.name OperatorName,
	O.email_address,
	'EXEC msdb.dbo.sp_update_job @job_name = N''' + J.[name] + ''', @notify_email_operator_name = ''DBA''' CommandToChangeIt
from msdb..sysjobs J
join msdb..sysoperators O
  on O.id = J.notify_email_operator_id
order by OperatorName desc;

Bulk Email Sender

I know this is not very “set” based, but the built-in procedure “sp_send_dbmail” can be a bit delicate, so to give it a rest between batches I executed this procedure hourly from a SQL Job.

The brief was to send emails individually IE: no long strings of addresses.

The environment was mature, with a working Email Profile, a database and tables already in-place and holding HTML style emails ready to go.

-- SendEmailProcess.sql

USE [SomeDatabase]
GO

CREATE PROCEDURE [dbo].[SendEmailProcess]

		@Test varchar(100) = null

AS

/* clear out the sysmail tables */

	DELETE FROM [msdb].[dbo].[sysmail_allitems]
	
/* parameters */

	DECLARE @ID uniqueidentifier,
			@To varchar(100),
			@Subject varchar(255),
			@Html varchar(max),
			@Return int

/* start of loop */

	WHILE (SELECT COUNT(*)	
		   FROM [SomeDatabase].[dbo].[EmailMessage] EM
		   JOIN [SomeDatabase].[dbo].[Recipient] R 
			 ON EM.Id = R.EmailMessage_Id2
		   WHERE EM.[Status] = 'Submitted') > 0
	BEGIN

	/* get any one email message */

		SELECT TOP 1
			@ID = EM.ID, 
			@To = isnull(@Test, R.EmailAddress),
			@Subject = EM.[Subject],
			@Html = EM.HtmlContent
		FROM [SomeDatabase].[dbo].[EmailMessage] EM
		JOIN [SomeDatabase].[dbo].[Recipient] R 
		  ON EM.Id = R.EmailMessage_Id2
		WHERE EM.[Status] = 'Submitted';

	/* send it */

		EXEC [msdb].[dbo].[sp_send_dbmail]
			 @profile_name = 'BulkMail',
			 @recipients = @To,
			 @subject = @Subject,
			 @body = @Html,
			 @body_format = 'HTML';

	/* check it worked */

		SET @Return = @@error
		
	/* if it worked - mark it as Sent */
	
		IF @Return = 0
		BEGIN
			UPDATE [SomeDatabase].[dbo].[EmailMessage]
			SET [Status] = 'Sent'
			WHERE Id = @ID
		END

	/* if it failed - flag it and move on */

		IF @Return != 0
		BEGIN
			UPDATE [SomeDatabase].[dbo].[EmailMessage]
			SET [Status] = 'Failed'
			WHERE Id = @ID
		END 
		
/* end of loop */

	END

GO
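
For a one-off test (all emails redirected to a single inbox) it can be called with the @Test parameter; the scheduled SQL Job simply omits it. The address below is just an example …

-- test run: every email goes to one address instead of the real recipients
EXEC [SomeDatabase].[dbo].[SendEmailProcess] @Test = 'dba@example.com';

-- live run, as scheduled hourly in the SQL Job
EXEC [SomeDatabase].[dbo].[SendEmailProcess];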

Splitting up a large MDF file

I wanted to break up a single 500GB MDF file into 5 files of around 100GB each, so I created 4 NDF files.

I set autogrowth to 1024 MB for the NDF files and OFF for the MDF file.
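
For reference, adding one of the NDF files and switching off autogrowth on the MDF looked something like this (file names, sizes and paths are illustrative only) …

-- add one of the four NDF files
ALTER DATABASE [SomeDatabase]
ADD FILE ( NAME = N'SomeDatabase_Data2',
           FILENAME = N'D:\SQLData\SomeDatabase_Data2.ndf',
           SIZE = 100GB,
           FILEGROWTH = 1024MB );

-- and stop the original MDF from growing
ALTER DATABASE [SomeDatabase]
MODIFY FILE ( NAME = N'MDFLogicalFileName', FILEGROWTH = 0 );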

In a SQL Job I used code like …

DBCC SHRINKFILE (N'MDFLogicalFileName', EMPTYFILE);

This, after the second weekend, had left about 88GB in the MDF file. I monitored progress with this query …

--datafiles.sql

select Physical_Name,
	ROUND(CAST((size) AS FLOAT)/128,2) Size_MB,
	ROUND(CAST((FILEPROPERTY([name],'SpaceUsed')) AS FLOAT)/128,2) Used_MB,
	convert(int, ROUND(CAST((FILEPROPERTY([name],'SpaceUsed')) AS float)/128,2) / ROUND(CAST((size) AS float)/128,2) * 100) Used_Pct
FROM sys.database_files
where physical_name not like '%ldf%'
order by physical_name;

I cancelled and deleted the job.

Over the next 3 week nights I reduced its physical size to 300GB, 200GB, then 100GB using a Job step like …

DBCC SHRINKFILE (N'MDFLogicalFileName', 100000);

I set MDF autogrowth to match the NDF files, so the five would naturally balance (size wise) over time.

Lastly I set up a nightly job to rebuild the most fragmented indexes (Thanks again Ola).
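
A job step along these lines would do it – assuming Ola Hallengren’s IndexOptimize procedure is installed (check the parameter names against the version you have) …

EXECUTE dbo.IndexOptimize
        @Databases = 'SomeDatabase',
        @FragmentationLevel1 = 5,
        @FragmentationLevel2 = 30,
        @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE';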

Update Reporting from Live (part 1 of 2)

I can hardly believe I am revisiting my old “Smart Diff backup / restore” project from 2015 – in this – the age of the cloud!

This time around I chose to use native commands instead of Ola’s scripts as the architecture was simpler. And SQL Server 2008r2 (yes, I know).

To start, I made a SQL job on Live “Backup for Reporting” to create a local backup …

Step-1 “Jump to Diff or Full backup”

This step attempts to manage smart diff-backups. If it is Friday, or there is no current full backup, or if some 3rd party has taken a full backup (rendering our full backup obsolete), then a full backup is taken via step-2.

If none of the above are true, over an hour can be saved by jumping directly to Step-3.

(NOTE: Step-1 was set to go to Step-3 on success, or the next-step on failure).

/* 1. Do a FULL backup if its Friday */

	SET DATEFIRST 7 /* FirstDayOfWeek = Sunday, so Friday = 6 */
	IF (SELECT DATEPART(WEEKDAY, GETDATE())) = 6 /* its Friday */	
	BEGIN
		RAISERROR ('Force this Job-Step to Fail - Its Friday', 16, 1)
		Return
	END

/* 2. Do a FULL backup if there aren't any */
 
	DECLARE @backups char(1)
	EXEC @backups = [titan].[master].[sys].[xp_cmdshell] 'DIR /b X:\SQLBackups\CMI_ProDB_Audit_Live\*FULL.bak'
	IF (select @backups) > 0 /* 0 = found, 1 = not found */
	BEGIN
		RAISERROR ('Force this Job-Step to Fail - There are no FULL backups', 16, 1)
		Return
	END

/* 3. Do a FULL backup if the LSNs don't match */

	IF	(SELECT isnull(MAX(differential_base_lsn), 1)
		FROM [Titan].[master].[sys].[master_files]
		WHERE physical_name like '%Audit%')
		!=
		(SELECT isnull(MAX(differential_base_lsn), 1)
		FROM [SQLREPORTING].[master].[sys].[master_files]
		WHERE physical_name like '%Audit%')
	BEGIN
		RAISERROR ('Force this Job-Step to Fail - The LSNs do not match', 16, 1)
		Return
	END

/* Else jump to the DIFF-backup step */

	Select 'We can do a DIFF backup'
	Print 'We can do a DIFF backup'

Step-2 “Create a new FULL backup”

This step creates a Full backup locally, overwriting any that already exist. When complete a Diff backup is taken via step-3.

A Diff backup immediately after a Full backup is not strictly necessary, but ensures that a) the restore process can be simple and robust (IE: a Full backup is always restored, then a Diff backup is always restored), and b) that we never have a Diff backup older than the partnering Full backup.

BACKUP DATABASE [LiveDatabaseName] 

TO DISK =  N'V:\SQLBackups\LiveDatabaseName\LiveDatabaseName_FULL.bak'

WITH CHECKSUM, COMPRESSION, INIT;

Step-3 “Create a new DIFF backup”

This step creates a Diff backup (containing changes since the last Full backup), overwriting any Diff backup for this database that already exists.

BACKUP DATABASE [LiveDatabaseName]

TO DISK =  N'V:\SQLBackups\LiveDatabaseName\LiveDatabaseName_DIFF.bak'

WITH CHECKSUM, COMPRESSION, DIFFERENTIAL, INIT;

Step-4 “Start the Restore job”.

Kicks-off the restore job on the Report Server.

EXECUTE [ReportServerName].[msdb].[dbo].[sp_start_job] 'Restore LiveDatabaseName';

Next I will detail the steps in the “restore” job on the report server.

Update Reporting from Live (part 2 of 2)

… I created a SQL Job on the Reporting server called “Restore from Live”.

Step-1 “Kill any connections”

Before a database can be restored it needs to be unused. Removing connections in this way is more reliable than changing the database to “Single-user-mode”.

DECLARE @kill VARCHAR(8000) = '';

SELECT  @kill = @kill + 'kill ' + CONVERT(VARCHAR(5), spid) + ';'
	FROM [master].[dbo].[sysprocesses]
	WHERE dbid = DB_ID('LiveDatabaseName')
	AND spid > 50;

EXEC (@kill);

Step-2 “Full Restore”

This step restores a Full backup (change the database name and file locations as required).

When complete the database will be left in a “Restoring” state. *It can be brought online either by completing the next step or by manual recovery EG: “Restore Database [LiveDatabaseName] with Recovery;”.

RESTORE DATABASE [LiveDatabaseName] 

FROM DISK = N'\\LiveServerName\SQLBackups\LiveDatabaseName\LiveDatabaseName_FULL.bak' 

WITH NORECOVERY, REPLACE;

Step-3 “Diff Restore”

This step restores a Diff backup similarly to the last step; however, it brings the database back online on completion. If this step ever fails, see * above.

RESTORE DATABASE [LiveDatabaseName] 

FROM DISK = N'\\LiveServerName\SQLBackups\LiveDatabaseName\LiveDatabaseName_DIFF.bak' 

WITH RECOVERY;

Step-4 “Switch to Simple Recovery Mode”

This step changes the database to Simple recovery mode where the log-files are re-used and do not require management (EG: regular log backups). This is appropriate for report servers where the data is already backed up from live “for recovery” (IE: outside of the backups detailed in these 2 posts).

ALTER DATABASE [LiveDatabaseName] set recovery SIMPLE;

Step-5 “Remove Orphans”

This deprecated command re-maps the orphaned database user “mssql” to the server login of the same name. (User-to-login mappings are by SID, and the restored database carries the SIDs from the live server.)

sp_change_users_login 'auto_fix', 'mssql';
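
On newer versions the supported alternative is ALTER USER … WITH LOGIN, which re-maps the user’s SID to the login directly …

USE [LiveDatabaseName];
ALTER USER [mssql] WITH LOGIN = [mssql];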

Footnote

I wrote these 2 jobs using minimal variables and dynamic SQL, and using a generous number of jobs and job-steps, in the hope that this will be robust and easy to manage. (And because I really like simplifying such things)

Removing unused databases

Here is my work-sheet for safely hiding suspected-unused databases from SSMS …

-- DetachDB.sql

-- 1. List all attached databases with file paths

	SELECT db_name(database_id) [Database], Physical_Name
	FROM sys.master_files
	order by [Database]

-- 2. Create Attach Script for chosen db (accumulate history here)

	USE [master]; -- on some servername
	CREATE DATABASE xxx ON
	(FILENAME = 'D:\SQLData\xxx.mdf'),
	(FILENAME = 'D:\SQLLogs\xxx.ldf')
	FOR ATTACH;

	USE [master]; -- on some servername
	CREATE DATABASE Test ON
	(FILENAME = 'D:\SQLData\Test.mdf'),
	(FILENAME = 'D:\SQLLogs\Test_log.ldf')
	FOR ATTACH;

-- 3. Detach Database

	USE [master];
	EXEC MASTER.dbo.sp_detach_db @dbname = N'xxx';

-- 4. To rollback, re-attach database (scripted in step-2)

GDPR for the DBA

I feel the GDPR is an attempt to enforce best-practice data husbandry. Mostly it’s rather good stuff; however, there are a few bits that are vague or impractical.

SSRS: User defined Subscriptions failing

I had an issue in SSRS where user-configured subscriptions would always fail to find their email address (an email server configuration problem probably).

My work-around was to allow users to type in (or paste in) their full email addresses.

To do this I first created a copy of the reporting services configuration file “C:\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\rsreportserver.config”.

Then edited the original in two places …

1) I changed <SendEmailToUserAlias>True</SendEmailToUserAlias> to False.
2) Then I inserted the name of the SMTP Server between <DefaultHostName> and </DefaultHostName>.

NOTE: To find the name of the SMTP Server I opened Reporting Services Configuration Manager, and navigated to “E-Mail Settings”.

Space Free

Central to my ‘Alert on low space’ job is this query, which is very handy by itself …

--spaceAlert.sql

select	volume_mount_point Drive, 
	cast(sum(available_bytes)*100 / sum(total_bytes) as int) as [Free%],
	avg(available_bytes/1024/1024/1024) FreeGB
from sys.master_files f
cross apply sys.dm_os_volume_stats(f.database_id, f.[file_id])
group by volume_mount_point
order by volume_mount_point;

 

GDPR Data Mapping

There is a fab new tool in SSMS 17.5 that helps with the GDPR spadework.

That is, preparing for the right of EU citizens to have their personal data deleted on request, enforceable from 25 May 2018.

To start, right-click on a database and choose Tasks / Classify Data.

The wizard then searches the current database and attempts to classify table-columns into categories. For example a column called ‘mobile’ containing telephone numbers would be categorized as ‘contact Info’.

Then the wizard adds a sensitivity label (contact-info would be “Confidential – GDPR”).

It’s a good idea to look at the actual data in a second screen whilst working down the recommendations list (in the first screen).

For each table-column you can accept / change / delete the recommendation.

Then, when you are done, you can save your work by clicking on “Accept selected recommendations”.

This is then saved within each database’s system view, sys.extended_properties.
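
The classifications can be queried back out of sys.extended_properties; a sketch like this works with the property names used by SSMS 17.x (they may differ on other versions) …

SELECT SCHEMA_NAME(t.schema_id) AS [schema],
       t.name   AS [table],
       c.name   AS [column],
       ep.name  AS [property],
       ep.value AS [value]
FROM sys.extended_properties ep
JOIN sys.tables  t ON t.object_id = ep.major_id
JOIN sys.columns c ON c.object_id = ep.major_id
                  AND c.column_id = ep.minor_id
WHERE ep.name IN ('sys_information_type_name', 'sys_sensitivity_label_name');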

Be assured that all selections can be changed / removed indefinitely, and that the tables / columns / data are not directly changed in any way.

The result is a smart Report which can be printed or emailed out, demonstrating that you have it all under control 😉

TSQL Performance Rule #1

There’s no significance to this one being number one 🙂 it’s just the one I’ve been thinking about lately 🙂 I may now have built this up a bit more than is warranted, so I hope you’re not expecting too much from my number one performance rule. Oh, OK then, here goes …

“All numbers in stored-procedures should be in single quotes”.

Even if they are defined as INT they could potentially force a VARCHAR to be converted to INT.

Consider WHERE SomeColumn = 42. Conversion precedence means VARCHARs will always be converted to INTs, never the other way around. The one numeric value above (42) could cause a million rows in the column (“SomeColumn”) to have to be converted to INT before they can be tested, significantly affecting performance.

Consider WHERE SomeColumn = ’42’. “SomeColumn” is either numeric or non-numeric. If it’s INT, for example, then just one value (the ’42’) has to be converted to INT (taking no time at all). If “SomeColumn” is VARCHAR then there is no conversion at all.
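
To see the difference for yourself (hypothetical table), compare the actual execution plans of the two queries below – the first typically shows a CONVERT_IMPLICIT on the column (and a scan), the second a simple seek …

-- hypothetical table: SomeColumn is VARCHAR and indexed
CREATE TABLE dbo.SomeTable (SomeColumn VARCHAR(20) NOT NULL);
CREATE INDEX IX_SomeColumn ON dbo.SomeTable (SomeColumn);

-- every row's SomeColumn must be converted to INT before the test
SELECT * FROM dbo.SomeTable WHERE SomeColumn = 42;

-- just the literal is considered; the column is left alone
SELECT * FROM dbo.SomeTable WHERE SomeColumn = '42';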

Shrink TempDB

To shrink the TempDB MDF file (before adding additional NDF files for example), whilst retaining the total size of 50GB …

DBCC FREEPROCCACHE
USE [tempdb]
GO
DBCC SHRINKFILE (N'tempdev' , 8192)
GO

Exporting a Report to Excel

Finance wanted to export their reports into spreadsheets but the company Logo and report Title were messing up the rendering.

To fix this I amended the SQL Server 2012 SSRS config file (called “rsreportserver.config“) after taking a copy.

The location of the config file was …

C:\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer

I commented out this line …

<Extension Name="EXCELOPENXML" Type="Microsoft.ReportingServices.Rendering.ExcelOpenXmlRenderer.ExcelOpenXmlRenderer,Microsoft.ReportingServices.ExcelRendering"/>

… and replaced it with these 7 lines …

<Extension Name="EXCELOPENXML" Type="Microsoft.ReportingServices.Rendering.ExcelOpenXmlRenderer.ExcelOpenXmlRenderer,Microsoft.ReportingServices.ExcelRendering">
	<Configuration>
		<DeviceInfo>
			<SimplePageHeaders>True</SimplePageHeaders>
		</DeviceInfo>
	</Configuration>
</Extension>

To use this, I moved the report Logo and Title into a heading-block within the reports.

** UPDATE **

On another occasion I was unable to make this global change and resorted to making changes within individual reports.

The method was to right-click on the items to be hidden, choose properties, then Visibility. I pasted this expression into the appropriate box …

=IIF(Globals!RenderFormat.Name = "EXCELOPENXML", True, False)

 

Add a Column to a Report Model

I had the odd request to add an extra column to a SQL 2008r2 “Report Model”. I had never heard of one of those, but it turned out to be a type of amalgamated data-source that the users created their own ad-hoc reports from (via Report Builder 1).

To add the extra column I just added it to the SQL View (which was named in the amalgamated data-source definition). Then to refresh the Report Model I downloaded it and uploaded it with a new name.

Report-Builder Cube Data-Source

Had a tricky situation connecting Report Builder 3 to a cube. I was able to copy the connection string from within SSDT but it still would not work.

I will use “Adventure Works” for illustration.

The solution was in the error message “Either the user, does not have access to the AdventureWorksDW2012 database, or the database does not exist.”

It turned out the database did not exist … as its SSMS Database-Engine name (“AdventureWorksDW2012”).

Connecting SSMS to Analysis Services however showed a different name “AdventureWorksDW2012Multidimensional-EE”

Plugging this into my connection string (with Data Source Type being Analysis services, and Connect Using being Windows Integrated Security) worked eg:-

Provider=SQLNCLI11.1;
Data Source=(ServerName\InstanceName);
Integrated Security=SSPI;
Initial Catalog=AdventureWorksDW2012Multidimensional-EE

Annoyingly (grrr!), I found just removing the Initial Catalog worked also (ah bah).

Update Statistics on a whole database

Whilst performance tuning an SSRS report server I wanted to update all the statistics within the two databases ‘ReportServer’ and ‘ReportserverTempDB’.

I chose a simply-coded, two-step method (for safety and to keep control).

First I generated the commands (per database) …

Select 'UPDATE STATISTICS [' + [name] + '] WITH FULLSCAN' 
from reportserver.sys.tables
Select 'UPDATE STATISTICS [' + [name] + '] WITH FULLSCAN' 
from reportservertempdb.sys.tables

… before executing them in a separate session.

SQL Refactor steps

I inherited a big, horrible stored procedure (sp), created by unioning two other sp’s together into one big mess.

This sp was feeding a report, and although it was returning the correct data, was taking a significant time to do so.

These are the steps I took to tame it …

1. Change the sp code into a query.
2. Save a copy of the original, because you never know 🙂
3. Save the code you are working on (regularly).
4. Run it. Save the output and record the number of rows returned.
5. Expand all views, and views of views, etc.
6. Run it after each change – be pedantic – then save.
7. Remove all commented-out lines of code.
8. Comment-out any select items (in the outer select) not needed by the report.
9. Working down from the top, comment-out any Select items in sub-queries not used by the parent.
10. Save all sub-queries into temp-tables (see the sketch below).
11. Tune the slowest sub-query.
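
A sketch of step 10, with hypothetical names – each sub-query gets materialized once, then the main query joins to the temp-table instead of repeating the sub-query …

-- materialize one sub-query into a temp table
SELECT CustomerId,
       SUM(NetAmount) AS TotalSales
INTO   #SalesByCustomer
FROM   dbo.SalesHeader
GROUP BY CustomerId;

-- the main query then joins to #SalesByCustomer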

Only emailing a spreadsheet when it contains data.

This was a new one to me! A user subscribed to an SSRS Report but only wanted to receive the email if the report contained data.

There seems to be loads of write-ups on the web about how to set this up. Here is how it went for me …

Firstly, I created the stored-procedure that would return rows from a live table that contained (business logic) errors (EG: “rep_exceptions”).

I created the report and subscribed to it (making sure to paste in my actual email address, not my login).

In the subscription form I specified both a start-date and an end-date that were in the past (ensuring the subscription would never actually fire), and ‘OK’ed the form.

Within SSMS “Job Activity Monitor” I located the job that had been created by this subscription (I looked for jobs that had never run, and then matched the start / end dates in the schedule with those I used in the subscription-form).

I copied the action statement from the job-step eg:

EXECUTE [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='5ec60080-b30c-4cfc-8571-19c4ba59aaac'

… into a new job. Then augmented it, to only run if there was data …

EXECUTE [ServerName].[dbo].[rep_exceptions]

if @@ROWCOUNT > 0

EXECUTE [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='5ec60080-b30c-4cfc-8571-19c4ba59aaac'

I scheduled the new job to run every day at 8am.

Once this was all tested and working – I changed the SSRS subscription email address to the user (and left me as CC temporarily).

Adding ‘All’ to a report drop-down list

There is a problem when you configure a report parameter to ‘Accept Multiple Items’, in that it won’t work with a stored procedure (SP).

One work-around is to only Accept Single Items, but make one of them ‘All’, like this …

1) Create a separate SP to populate the parameter and add an ‘All’ option …

SELECT 'All' CustId
UNION
SELECT DISTINCT CustomerId
FROM SomeTable
ORDER BY CustId

2) Then amend the main SP by joining to SomeTable, and adding a CASE statement in the WHERE clause, like so …

WHERE SomeTable.CustomerId = CASE WHEN @CustId != 'All' THEN @CustId ELSE SomeTable.CustomerId END

Which translates as …

WHERE SomeTable.CustomerId = SomeTable.CustomerId

… when ‘All’ is selected (which will let everything through), or …

WHERE SomeTable.CustomerId = @CustId

… where ‘All’ is Not selected.

This allows the user to select All or a single value.

However, if you need to select more than one value but not all values, you will need another approach – a split-string function.
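
On SQL Server 2016 and above the built-in STRING_SPLIT can do the splitting; a sketch (with hypothetical names, and the selections arriving as one comma-separated string) looks like this …

DECLARE @CustIds VARCHAR(1000) = '3,7,42'; -- eg: from Join(Parameters!CustId.Value, ",")

SELECT st.*
FROM   SomeTable st
WHERE  st.CustomerId IN (SELECT LTRIM(value) FROM STRING_SPLIT(@CustIds, ','));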

Powershell: sort by a column name that contains a space

By trial and error I found the answer is to just remove the space. For example to sort by the column “Recovery Model” …

import-module SQLPS; cls
$smoserver = new-object microsoft.sqlserver.management.smo.server
$smoserver.databases | sort-object RecoveryModel

The secret of Performance Tuning

Sure you can make a list of rules, or follow someone else’s list. But that’s just the junior stuff. The stuff that will be automated soon.

The secret of proficient performance tuning is just one word. And you won’t like it.

Knowing this word won’t instantly help you. You studied technology in the first place to get away from this messy word. But here it is again. Creativity. Sorry.

Memorising those rules is like learning the alphabet. No one will pay you for that.

MDX Commandments

1) Always use fully qualified names.

Selecting from ‘2016’ may work initially, as the engine returns the first object it finds with this name (which just so happens to be in the calendar dimension).

However after a time there may be a customer-number ‘2016’ (in the customer dimension) that will be returned erroneously.

2) A Tuple marks the co-ordinates of the data to be returned.

A list of comma separated dimensions that intersect on the required data. EG:
(customer 123, date 10/07/2016, sales-item 456).

3) A Set is a list of related objects

EG:{date 10/07/2016, date 11/07/2016, date 12/07/2016}. A Set can be a list of Tuples.

Re run Subscriptions

When an email server failed overnight I used this script to generate the commands to re-send reports to subscribers …

-- RerunFailedSubscriptions.sql

-- generate the commands to resend report-emails that failed for some reason

select s.LastStatus, LastRunTime, 'exec sp_start_job @job_name = ''' + cast(j.name as varchar(40)) + '''' [Command To Re-run the Job]
from msdb.dbo.sysjobs j  
join  msdb.dbo.sysjobsteps js on js.job_id = j.job_id 
join  [ReportServer$REPORTS].[dbo].[Subscriptions] s  on js.command like '%' + cast(s.subscriptionid as varchar(40)) + '%' 
where s.LastStatus like 'Failure sending mail%';

Using “Select All” into a dynamic query

After changing a report Select statement into dynamic-sql I found “Select All” no longer worked for my “Customers” parameter.

To fix this, in Dataset/Properties/Parameters I changed the Parameter Value to

=join(Parameters!customer.Value,"','")

To translate … after the “.Value” there is … comma, double-quote, single-quote, comma, single-quote, double-quote, close-bracket. So in the query each value ends up surrounded by its own single quotes. Simples 🙂
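
In other words, if the dynamic query is built something like this simplified sketch (table and column names are made up), the joined parameter value drops neatly into the IN list …

-- @customer arrives from SSRS as: Smith','Jones','Brown
DECLARE @customer VARCHAR(1000) = 'Smith'',''Jones'',''Brown';

DECLARE @cmd NVARCHAR(MAX) =
    'select * from dbo.Sales where CustomerName in (''' + @customer + ''')';

EXEC (@cmd); -- runs: select * from dbo.Sales where CustomerName in ('Smith','Jones','Brown')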

A Report that uses its location

A customer had many copies of the same report – one for each site. And the application database had multiple copies of each table – one for each site. The table names were all appended with the sites name EG: “WP London_SalesHeader”, “WP Barcelona_SalesHeader” etc.

In Report Manager there was a folder for each site, containing sub-folders for different categories of report (EG: /London/Sales, OR /London/Production).

This is a simplified account of how I created a Report that returned only information specific to its location.

In Report Builder I created a hidden parameter called @site of type Text with no “Available Values” and its “Default Values” using the global variable ReportFolder.

As the output from this built-in variable would be like “\Paris\Sales” I had to create an expression for the “Default Value” of @site searching through each site name in turn …

=IIf(Globals!ReportFolder.Contains("Barcelona"),"WP Barcelona",
IIf(Globals!ReportFolder.Contains("Paris"),"WP Paris", "WP London"))

Finally, in the report query I derived the table name using the @site parameter.

declare @cmd varchar(max) = 
'select	[SalesCode],
	[Description],
	[NetWeight],
	[SalesDate]
from	[Production].[dbo].[' + @site + '_SalesHeader]'

exec(@cmd)

(NB: As a best-practice I displayed the value of @site, along with the other parameter choices, in the report sub-title.)

Report Server log – days kept

I viewed our Report Configurations by running this against the Report Server database

Select * from dbo.ConfigurationInfo;

I then changed the number of days the log retained from the default 60 like this …

Update ConfigurationInfo SET Value='366' Where NAME='ExecutionLogDaysKept';

Gauge auto scale

The gauges in SSRS do not have an ‘auto’ option like the charts do. I wanted the ‘min-scale-value’ to be on the 500 boundary below the lowest value, and the ‘max-scale-value’ on the 500 boundary above the maximum value.

For example …

ytd-Balance / ytd-Budget / min-scale / max-scale

865 / 1022 / 500 / 1500

2690 / 325 / 0 / 3000

5346 / 7453 / 5000 / 7500

… here’s the expressions …

=iif(Fields!bal.Value<Fields!bud.Value, Int(Round(Fields!bal.Value/1000/1000,2)*2)*500,
iif(Fields!bal.Value>Fields!bud.Value, Int(Round(Fields!bud.Value/1000/1000,2)*2)*500,nothing))
=iif(Fields!bal.Value<Fields!bud.Value, Int(Round(Fields!bud.Value/1000/1000,2)*2+1)*500,
iif(Fields!bal.Value>Fields!bud.Value,Int(Round(Fields!bal.Value/1000/1000,2)*2+1)*500,nothing))

Recover deleted chart axis

I deleted my Chart date/time axis for a cleaner look – but then changed my mind.

Within Report Builder 3 I recovered it by …

1. Making the ‘Properties’ pane visible.

2. Clicking on the chart, then in the Properties pane navigating to Chart, Chart Areas.

3. Clicking on the ellipses (“…”), which brought up a chart-area properties box, then clicking the ellipses again (next to Axis, Category Axis).

4. Changing the Visibility to ‘True’.

Unstructured Data

How useful is unstructured data really?

In a train station there may be a timetable, an arrivals board, leaves on the line, a chalk board saying you can get a cake half price with a coffee, tannoy announcements apologizing for a delay caused by an earlier signaling error, and a memorable picture from a new movie. Discuss …

Collecting Wait Stats

Based on the work of GS, here is my script to create a Job that collects wait stats every 15 minutes.

--CreateJob_DBA_CollectWaitStats.sql

USE [msdb]
GO

IF  EXISTS (SELECT job_id FROM msdb.dbo.sysjobs_view WHERE name = N'DBA_CollectWaitStats')
EXEC msdb.dbo.sp_delete_job @job_name=N'DBA_CollectWaitStats', @delete_unused_schedule=1
GO

USE [msdb]
GO

BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'Database Maintenance' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'Database Maintenance'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END

declare 
	@today varchar(50) = (select convert(varchar, getdate(), 112)),
	@nextweek varchar(50) = (select convert(varchar, getdate()+8, 112)),
	@dbname varchar(50) = 'master' --<<<<< change as required >>>>>

DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'DBA_CollectWaitStats', 
		@enabled=1, 
		@notify_level_eventlog=0, 
		@notify_level_email=0, 
		@notify_level_netsend=0, 
		@notify_level_page=0, 
		@delete_level=0, 
		@description=N'Collects wait stats for performance tuning.', 
		@category_name=N'Database Maintenance', 
		@owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Create the table', 
		@step_id=1, 
		@cmdexec_success_code=0, 
		@on_success_action=3, 
		@on_success_step_id=0, 
		@on_fail_action=3, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N'TSQL', 
		@command=N'create table [dbo].[WaitStats] 
(
	WaitType nvarchar(60) not null,
	NumberOfWaits bigint not null,
	SignalWaitTime bigint not null,
	ResourceWaitTime bigint not null,
	SampleTime datetime not null
)', 
		@database_name=@dbname, 
		@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Collect current waits', 
		@step_id=2, 
		@cmdexec_success_code=0, 
		@on_success_action=1, 
		@on_success_step_id=0, 
		@on_fail_action=2, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N'TSQL', 
		@command=N'INSERT INTO [dbo].[WaitStats]
SELECT  wait_type as WaitType,
        waiting_tasks_count AS NumberOfWaits,
        signal_wait_time_ms AS SignalWaitTime,
        wait_time_ms - signal_wait_time_ms AS ResourceWaitTime,
        GETDATE() AS SampleTime
FROM    sys.dm_os_wait_stats
WHERE [wait_type] NOT IN (
	N''BROKER_EVENTHANDLER'', N''BROKER_RECEIVE_WAITFOR'',
	N''BROKER_TASK_STOP'', N''BROKER_TO_FLUSH'',
	N''BROKER_TRANSMITTER'', N''CHECKPOINT_QUEUE'',
	N''CHKPT'', N''CLR_AUTO_EVENT'',
	N''CLR_MANUAL_EVENT'', N''CLR_SEMAPHORE'',
	N''DBMIRROR_DBM_EVENT'', N''DBMIRROR_EVENTS_QUEUE'',
	N''DBMIRROR_WORKER_QUEUE'', N''DBMIRRORING_CMD'',
	N''DIRTY_PAGE_POLL'', N''DISPATCHER_QUEUE_SEMAPHORE'',
	N''EXECSYNC'', N''FSAGENT'',
	N''FT_IFTS_SCHEDULER_IDLE_WAIT'', N''FT_IFTSHC_MUTEX'',
	N''HADR_CLUSAPI_CALL'', N''HADR_FILESTREAM_IOMGR_IOCOMPLETION'',
	N''HADR_LOGCAPTURE_WAIT'', N''HADR_NOTIFICATION_DEQUEUE'',
	N''HADR_TIMER_TASK'', N''HADR_WORK_QUEUE'',
	N''KSOURCE_WAKEUP'', N''LAZYWRITER_SLEEP'',
	N''LOGMGR_QUEUE'', N''MEMORY_ALLOCATION_EXT'',
	N''ONDEMAND_TASK_QUEUE'',
	N''PREEMPTIVE_XE_GETTARGETSTATE'',
	N''PWAIT_ALL_COMPONENTS_INITIALIZED'',
	N''PWAIT_DIRECTLOGCONSUMER_GETNEXT'',
	N''QDS_PERSIST_TASK_MAIN_LOOP_SLEEP'', N''QDS_ASYNC_QUEUE'',
	N''QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP'',
	N''QDS_SHUTDOWN_QUEUE'',
	N''REQUEST_FOR_DEADLOCK_SEARCH'', N''RESOURCE_QUEUE'',
	N''SERVER_IDLE_CHECK'', N''SLEEP_BPOOL_FLUSH'',
	N''SLEEP_DBSTARTUP'', N''SLEEP_DCOMSTARTUP'',
	N''SLEEP_MASTERDBREADY'', N''SLEEP_MASTERMDREADY'',
	N''SLEEP_MASTERUPGRADED'', N''SLEEP_MSDBSTARTUP'',
	N''SLEEP_SYSTEMTASK'', N''SLEEP_TASK'',
	N''SLEEP_TEMPDBSTARTUP'', N''SNI_HTTP_ACCEPT'',
	N''SP_SERVER_DIAGNOSTICS_SLEEP'', N''SQLTRACE_BUFFER_FLUSH'',
	N''SQLTRACE_INCREMENTAL_FLUSH_SLEEP'',
	N''SQLTRACE_WAIT_ENTRIES'', N''WAIT_FOR_RESULTS'',
	N''WAITFOR'', N''WAITFOR_TASKSHUTDOWN'',
	N''WAIT_XTP_RECOVERY'',
	N''WAIT_XTP_HOST_WAIT'', N''WAIT_XTP_OFFLINE_CKPT_NEW_LOG'',
	N''WAIT_XTP_CKPT_CLOSE'', N''XE_DISPATCHER_JOIN'',
	N''XE_DISPATCHER_WAIT'', N''XE_TIMER_EVENT'')
AND	[waiting_tasks_count] > 0

', 
		@database_name=@dbname,
		@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Every 15 mins for a week', 
		@enabled=1, 
		@freq_type=4, 
		@freq_interval=1, 
		@freq_subday_type=4, 
		@freq_subday_interval=15, 
		@freq_relative_interval=0, 
		@freq_recurrence_factor=0, 
		@active_start_date=@today, 
		@active_end_date=@nextweek, 
		@active_start_time=100, 
		@active_end_time=235959, 
		@schedule_uid=N'5b0842fe-8f80-44e9-8a09-aac6ce5c2b2e'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO
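
Once it has been collecting for a while, a rough query like this (just a sketch – it ignores any service restarts during the period) shows which waits grew the most between the first and last samples …

SELECT WaitType,
       MAX(ResourceWaitTime) - MIN(ResourceWaitTime) AS ResourceWaitMsGrowth,
       MAX(SignalWaitTime)   - MIN(SignalWaitTime)   AS SignalWaitMsGrowth,
       MAX(NumberOfWaits)    - MIN(NumberOfWaits)    AS WaitCountGrowth
FROM dbo.WaitStats
GROUP BY WaitType
ORDER BY ResourceWaitMsGrowth DESC;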

My Hekaton template

--hek_template.sql

-- 1) add filegroup (if not already there)

USE [master]
GO
begin try
ALTER DATABASE [DemoDW] ADD FILEGROUP [MemoryOptimizedFG] CONTAINS MEMORY_OPTIMIZED_DATA
end try begin catch end catch

-- 2) add filestream file into filegroup (unless already done)

USE [master]
GO
begin try
ALTER DATABASE [DemoDW] ADD FILE ( NAME = N'DemoDB_hek', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\DemoDB_hek' ) TO FILEGROUP [MemoryOptimizedFG]
end try begin catch end catch

-- 3) remove table if it already exists

USE [DemoDW]
GO
begin try
drop table recentsales
end try begin catch end catch

-- 4) create memory optimized table (hekaton!)

USE [DemoDW]
GO
create table recentsales
(
ItemID int NOT NULL PRIMARY KEY NONCLUSTERED HASH (ItemID) with (BUCKET_COUNT=1024),
[Name] varchar(50) NOT NULL,
Price decimal NOT NULL
)
with (MEMORY_OPTIMIZED = ON,
DURABILITY = SCHEMA_AND_DATA
)

 

PostgreSQL Terms

A Cluster is a single, complete, running, PostgreSQL server (IE: cluster of databases)

  • One PostgreSQL Server
  • Listening on one port (may be multiple addresses)
  • One set of data files (including tablespaces)
  • One set of Write Ahead Log files

Operations done on a Cluster:

  • Initialization (initdb)
  • Start / Stop the cluster
  • File-level Backup / Restores
  • Streaming Replication

Objects defined at Cluster level

  • Users / Roles
  • Tablespaces
  • Databases

PostgreSQL – Administration

Maintenance Tasks

  • Keep autovacuum enabled most of the time
  • VACUUM regularly as well
  • Check for unused indexes

Warnings: (unless you know what you are doing …)

  • Avoid using VACUUM FULL
  • REINDEX CONCURRENT does not exist (yet)
  • Do not use HASH INDEXES
  • Do not use fsync = off

RTFM

  • PostgreSQL docs are about 2000 pages
  • Technically accurate
  • Written and maintained by the developers

Security

  • Superuser is too powerful for most use cases (SECURITY DEFINER functions)
  • Use a distinct userid for replication
  • GRANT minimal access rights

Upgrades

  • Maintenance releases happen about every 3 months
  • For best security – upgrade to latest maintenance release
  • Major release upgrades are harder
  • UDR technology (that’s UniDirectional Replication) will make Major release upgrades much easier from 9.4+

Extensions

  • PostgreSQL is designed to be extensible
  • Many new features enabled via extensions (EG: pgaudit, postgis)
  • Use them!

Scripts

  • GUIs do not allow you to apply changes in a transaction or easily record your actions
  • Use scripts for any administrative changes
  • Test them, before applying

Schema Change: adding a foreign key is now split into two parts:

  • ALTER TABLE foo ADD FOREIGN KEY … REFERENCES bar NOT VALID;
  • ALTER TABLE foo VALIDATE CONSTRAINT fook;
  1. Apply constraint going forward (with quick write-lock)
  2. Check data already in the table (a background task)

 

PostgreSQL study notes – psql

Features

  1. non-interactive usage
  2. command history (up/down arrow)
  3. tab completion (sans Windows)
  4. commands terminate with semi-colon and can wrap lines
  5. defaults to supplying the currently logged-in username as the pg user

Tasks

  1. explore ‘psql’
    a. ‘psql --version’ returns the version of the postgresql client
    b. ‘psql -l -U postgres’ lists installed db’s then exits.
    postgreSQL installs 3 default db’s
    1. ‘postgres’ – management db. contains user accounts, global settings, etc
    2. ‘template0’ – vanilla read-only db
    3. ‘template1’ – changable copy of template0, used as template for new db’s
    c. ‘psql’ with no options enters interactive mode (duh)
    a hash-mark ending the interactive-prompt denotes a ‘superuser’ eg: ‘postgres=#’
    d. ‘\h’ – returns sql-specific help eg: ‘ALTER TABLE …’
    ‘\h create’ filters above to just ‘CREATE …’ commands
    e. ‘\?’ – returns ‘psql’ specific help eg: ‘\?’ ie: usable metasequences ie shortcuts
    f. ‘\l’ – returns list of DBs. ‘\l+’ additionally returns DB sizes
    g. ‘\du[+]’ – returns list of users with access to postgresql.
    h. ‘\!’ open shell from session (‘exit’ from shell = back to psql)
    i. ‘\! [command] – runs command in shell non-interactively and returns to psql.
    j. ‘\i filename’ – execute command(s) in the file ie psql or sql commands
    k. multiple commands can be run on one line. Separate with space, terminate with semi-colon (eg: ‘\l \du;’)
    l. ‘\c’ – connect to another database or host eg: ‘\c template1’
    m. ‘\d’ list tables\views etc in current DB, ‘\dS’ list system tables, ‘\dS+’ list system tables with sizes.
    n. ‘\q’ – quit
  2. ‘psql --help’ (or ‘psql -?’) lists psql option switches & defaults (short version then long version) eg: -U (username – short version), -l (list – short version), --version (long version) …

 

PostgreSQL study notes – Installation

Download from Enterprise DB, gui is the one to go for, remember xhost on linux

1. Install. For prod you should indicate that data files are stored independently of the source tree. A ‘Cluster’ is not a classic Cluster IE: a Server Cluster, just means all the databases on this particular box.

2. Explore the footprint of the install. \bin\ contains utilities like psql.exe (terminal monitor). \data\ contains all databases, the 3 config files, pg_log/ (log files), pg_xlog/ (write ahead log folder), and postmaster.opts (startup options).

3. Provide access to internal docs via web-browser bookmark eg: file:///C:/Program%20Files/PostgreSQL/9.5/doc/postgresql/html/index.html

4. Add the \bin folder to the path – for psql.exe etc (eg: C:\Program Files\PostgreSQL\9.5\bin). PostgreSQL clients default to submitting the currently logged-in user’s name as the DB user name (eg: “psql” without -U will assume the user is the windows-user).

5. Add the system variable ‘PGUSER=postgres’ as a workaround, so psql etc won’t try to log in to the utilities as the o/s-user.

 

More thoughts against Triggers

For Rob & Karl – Triggers run outside of transactions. An insert that fires a trigger may be rolled back, but the trigger rolls on.

Triggers introduce a long-term maintenance headache. You can read a stored-procedure from top to bottom and imagine you understand what it does. But unless you examine every table it touches – you don’t. Little bits of code may be running silently which augment or even reverse some of the logic within the stored-procedure.

Triggers are used by lazy developers to ‘bolt on’ new features to applications, rather than track-down all the code that could insert/update/delete from a table and add the code (or a link to it) there.

This would be forgivable if the application code was closed or proprietary, but never when the application is open to the application developer, who just cannot be bothered to integrate code changes properly, and cares not-a-jot about long-term maintenance headaches.

(slow breaths, slow breaths)

PostgreSQL study notes – Features

  1. Object Relational Database Management System (ORDBMS) objects (eg: tables) can be related in a Hierarchy: Parent -> Child
  2. Transactional RDBMS: SQL statements have implicit: BEGIN; COMMIT: statements. SQL statements may also have explicit BEGIN COMMIT statements
  3. developed at UC Berkeley, along with bsd-unix
  4. One process per connection: master process = “postmaster” auto-spawns per new connection
  5. Processed (pid’s) use one cpu-core per connection: o/s may spawn new connections on a different cpu-core. no cross-core queries
  6. Multiple helper processes, which appear as ‘postgres’ instances, always running eg: stats collector, background writer (protects against sudden failure), auto-vacuum (cleanup / space reclaimer), wal sender (that’s write ahead log)
  7. max db size: unlimited – limited only by available storage (terabytes, petabytes, exabytes)
  8. max table size: 32tb – stored as multiple 1gb files – changeable (could be a problem for some o/s’s)
  9. max row size: 400gb
  10. max column size: 1gb (per row ie: per field)
  11. max indexes per table: unlimited
  12. max identifier length (db objects – tables, columns, triggers, functions): 63 bytes. this is extensible via source code
  13. default listener tcp port 5432. so may install postgresql as a non-privileged user
  14. users are distinct from o/s users
  15. users are authenticated globally (per server), then assigned permissions per database.
  16. inheritance. tables lower in the hierarchy may inherit columns from higher tables (ie parents) so long as there are no constraints eg foreign keys.
  17. case insensitive commands – without double quotes (eg: select * from syslog;)
  18. case sensitive commands – with double quotes – (eg: select * from “syslog”;)
  19. three primary config files, located in postgres-root A.pg_hba.conf (host based access) B.postgresql.conf general settings C. pg_ident.conf – user mappings
  20. integrated log rotation (config by age or size)

Capitalization

Scripting languages are wonderful things. They use subsets of English and are therefore easy to learn (EG: update, delete, get, put).

However, capitalization is totally redundant. A capital letter marks the beginning of a sentence, but scripts do not use sentences.

One alphabet is enough (a to z) who needs another one (A to Z) that is completely equivalent?

Imagine responding to the adhoc query “Please email me a list of most ordered items over Christmas” with “in what font?” lol

Although … one trick that I have used over the years, with having two different ways to express the same data,  is to temporarily change the capitalization of text to double-check for myself that Replication etc is actually working (before the days of tracer-token poo sticks). Changing the data look without changing the data value.

Another sneaky trick is making mass changes to data whilst adding a flag (to only the changed data). For example, changing every field containing ‘unsubscribe’ to ‘uNsubscribe’, or ‘yes’ to ‘yEs’.

And then repeating with any un-flagged fields until only ‘uNsubscribe’ or ‘yEs’ remain (lol).

This (typical DBA belt & braces) method almost always guarantees you will not induce any unintended processing errors further down the line, as the data always remains the same length, type and meaning.
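
As a sketch of the flagging trick (table and column names are hypothetical, and note the case-sensitive collation so repeat runs only touch un-flagged rows) …

-- flag each row we change by flipping one letter's case
-- (same length, type and meaning - just a different look)
UPDATE dbo.MailingList
SET    Preference = 'uNsubscribe'
WHERE  Preference COLLATE Latin1_General_CS_AS = 'unsubscribe'
AND    LastMailedDate < '20240101'; -- whatever the mass-change criteria are

-- repeat runs: rows still all lower-case have not been processed yet
SELECT COUNT(*) AS StillUnflagged
FROM   dbo.MailingList
WHERE  Preference COLLATE Latin1_General_CS_AS = 'unsubscribe';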

 

Senior Consultant Motivations

Autonomy, Mastery, and Purpose.

  1. To prioritize their own time.
  2. To master their profession.
  3. To do important work that matters.

SQL Deadlock graph – arrows

In a SQL deadlock graph the direction of the arrows is an interesting thing.

pic02

With my mechanistic head on, I am imagining it as …

  1. Spid-a requested a lock, and then got a lock (a two-way trip)
  2. Spid-b requested a lock, and then got a lock (arrow ends-up pointing at spid)
  3. Spid-a requested a lock, and is waiting (a one-way thing)
  4. Spid-b requested a lock, and is waiting (arrow pointing away from spid)

Capture Deadlock Graph using Profiler

To Capture a Deadlock Graph using Profiler (NB: with SQL 2008 and above you can also use an extended-event).

  • File / New trace
  • Connection details
  • Use Template / Blank
  • Events Selection / Locks ..1) DeadlockGraph 2) Lock:Deadlock 3) Lock:Deadlock Chain
  • Event Extraction Settings / Save Deadlock XML events separately / (somefilename)
  • Each deadlock in a distinct file
  • All Deadlocks
  • Run
  • (wait)
  • File / Export / Extract SQL Server Events / Extract deadlock Events / (somefilename2)

Log-shipping Restore error: empty file

I noticed a log-shipping RESTORE job had started failing. Looking back through the job history I found the last two “good” executions contained errors …

*** Error: Could not apply log backup file ‘SomePath\SomeFile.trn’ to secondary database. The volume is empty.

I looked at the path\file specified and found the file was zero size.

I looked on the network-share where the files are backed-up-to \ copied-from, and found the same file was NOT zero size.

I manually copied the file from the network-share to the destination folder (on the DR server), overwriting the empty file.

Log-shipping recovered over the next few hours.