Greetings!

Welcome to my technical blog.

I want to collect here the first-hand experiences, tips, and best-practices that I use frequently. I hope you find something here of interest to you🙂.

There is no ‘standard’ to my posts. Some are blindingly obvious – where I just had to note something down that tripped me up. Other posts (hopefully) will be deeper.

Mostly I write in the first person: “I did this and it worked”, rather than “if you do this it may work if your environment is similar to mine”, in the hope that you can adapt my succinct notes (that ACTUALLY worked) to your situation.

Report Builder 3: Self Service

The security settings needed to allow Business users to use Report Builder safely are quite tricky.

Unstructured Data

How useful is unstructured data really?

In a train station there may be a timetable, an arrivals board, 205 leaves on the line, a chalk board saying you can get a cake half price with a coffee, tannoy announcements apologising for a delay caused by an earlier signaling error, and a memorable picture from a new movie.

Collecting Wait Stats

Based on the work of GS, here is my script to create a Job that collects wait stats every 15 minutes.

--CreateJob_DBA_CollectWaitStats.sql

USE [msdb]
GO

IF  EXISTS (SELECT job_id FROM msdb.dbo.sysjobs_view WHERE name = N'DBA_CollectWaitStats')
EXEC msdb.dbo.sp_delete_job @job_name=N'DBA_CollectWaitStats', @delete_unused_schedule=1
GO

USE [msdb]
GO

BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'Database Maintenance' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'Database Maintenance'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END

declare 
	@today varchar(50) = (select convert(varchar, getdate(), 112)),
	@nextweek varchar(50) = (select convert(varchar, getdate()+8, 112)),
	@dbname varchar(50) = 'master' --<<< change this as required >>>

DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'DBA_CollectWaitStats', 
		@enabled=1, 
		@notify_level_eventlog=0, 
		@notify_level_email=0, 
		@notify_level_netsend=0, 
		@notify_level_page=0, 
		@delete_level=0, 
		@description=N'Collects wait stats for performance tuning.', 
		@category_name=N'Database Maintenance', 
		@owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Create the table', 
		@step_id=1, 
		@cmdexec_success_code=0, 
		@on_success_action=3, 
		@on_success_step_id=0, 
		@on_fail_action=3, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N'TSQL', 
		@command=N'create table [dbo].[WaitStats] 
(
	WaitType nvarchar(60) not null,
	NumberOfWaits bigint not null,
	SignalWaitTime bigint not null,
	ResourceWaitTime bigint not null,
	SampleTime datetime not null
)', 
		@database_name=@dbname, 
		@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Collect current waits', 
		@step_id=2, 
		@cmdexec_success_code=0, 
		@on_success_action=1, 
		@on_success_step_id=0, 
		@on_fail_action=2, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N'TSQL', 
		@command=N'INSERT INTO [dbo].[WaitStats]
SELECT  wait_type as WaitType,
        waiting_tasks_count AS NumberOfWaits,
        signal_wait_time_ms AS SignalWaitTime,
        wait_time_ms - signal_wait_time_ms AS ResourceWaitTime,
        GETDATE() AS SampleTime
FROM    sys.dm_os_wait_stats
WHERE [wait_type] NOT IN (
	N''BROKER_EVENTHANDLER'', N''BROKER_RECEIVE_WAITFOR'',
	N''BROKER_TASK_STOP'', N''BROKER_TO_FLUSH'',
	N''BROKER_TRANSMITTER'', N''CHECKPOINT_QUEUE'',
	N''CHKPT'', N''CLR_AUTO_EVENT'',
	N''CLR_MANUAL_EVENT'', N''CLR_SEMAPHORE'',
	N''DBMIRROR_DBM_EVENT'', N''DBMIRROR_EVENTS_QUEUE'',
	N''DBMIRROR_WORKER_QUEUE'', N''DBMIRRORING_CMD'',
	N''DIRTY_PAGE_POLL'', N''DISPATCHER_QUEUE_SEMAPHORE'',
	N''EXECSYNC'', N''FSAGENT'',
	N''FT_IFTS_SCHEDULER_IDLE_WAIT'', N''FT_IFTSHC_MUTEX'',
	N''HADR_CLUSAPI_CALL'', N''HADR_FILESTREAM_IOMGR_IOCOMPLETION'',
	N''HADR_LOGCAPTURE_WAIT'', N''HADR_NOTIFICATION_DEQUEUE'',
	N''HADR_TIMER_TASK'', N''HADR_WORK_QUEUE'',
	N''KSOURCE_WAKEUP'', N''LAZYWRITER_SLEEP'',
	N''LOGMGR_QUEUE'', N''MEMORY_ALLOCATION_EXT'',
	N''ONDEMAND_TASK_QUEUE'',
	N''PREEMPTIVE_XE_GETTARGETSTATE'',
	N''PWAIT_ALL_COMPONENTS_INITIALIZED'',
	N''PWAIT_DIRECTLOGCONSUMER_GETNEXT'',
	N''QDS_PERSIST_TASK_MAIN_LOOP_SLEEP'', N''QDS_ASYNC_QUEUE'',
	N''QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP'',
	N''QDS_SHUTDOWN_QUEUE'',
	N''REQUEST_FOR_DEADLOCK_SEARCH'', N''RESOURCE_QUEUE'',
	N''SERVER_IDLE_CHECK'', N''SLEEP_BPOOL_FLUSH'',
	N''SLEEP_DBSTARTUP'', N''SLEEP_DCOMSTARTUP'',
	N''SLEEP_MASTERDBREADY'', N''SLEEP_MASTERMDREADY'',
	N''SLEEP_MASTERUPGRADED'', N''SLEEP_MSDBSTARTUP'',
	N''SLEEP_SYSTEMTASK'', N''SLEEP_TASK'',
	N''SLEEP_TEMPDBSTARTUP'', N''SNI_HTTP_ACCEPT'',
	N''SP_SERVER_DIAGNOSTICS_SLEEP'', N''SQLTRACE_BUFFER_FLUSH'',
	N''SQLTRACE_INCREMENTAL_FLUSH_SLEEP'',
	N''SQLTRACE_WAIT_ENTRIES'', N''WAIT_FOR_RESULTS'',
	N''WAITFOR'', N''WAITFOR_TASKSHUTDOWN'',
	N''WAIT_XTP_RECOVERY'',
	N''WAIT_XTP_HOST_WAIT'', N''WAIT_XTP_OFFLINE_CKPT_NEW_LOG'',
	N''WAIT_XTP_CKPT_CLOSE'', N''XE_DISPATCHER_JOIN'',
	N''XE_DISPATCHER_WAIT'', N''XE_TIMER_EVENT'')
AND	[waiting_tasks_count] > 0

', 
		@database_name=@dbname,
		@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Every 15 mins for a week', 
		@enabled=1, 
		@freq_type=4, 
		@freq_interval=1, 
		@freq_subday_type=4, 
		@freq_subday_interval=15, 
		@freq_relative_interval=0, 
		@freq_recurrence_factor=0, 
		@active_start_date=@today, 
		@active_end_date=@nextweek, 
		@active_start_time=100, 
		@active_end_time=235959, 
		@schedule_uid=N'5b0842fe-8f80-44e9-8a09-aac6ce5c2b2e'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO
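
Once a few samples exist, a query along these lines shows what the server was actually waiting on. This is a minimal sketch (assuming the WaitStats table created by the job above); because sys.dm_os_wait_stats is cumulative, it diffs the two most recent samples …

--WaitStatsDelta.sql

WITH ranked AS (
	SELECT *, DENSE_RANK() OVER (ORDER BY SampleTime DESC) AS rn
	FROM [dbo].[WaitStats]
)
SELECT	cur.WaitType,
	cur.NumberOfWaits    - prev.NumberOfWaits    AS NewWaits,
	cur.ResourceWaitTime - prev.ResourceWaitTime AS ResourceWaitMs,
	cur.SignalWaitTime   - prev.SignalWaitTime   AS SignalWaitMs
FROM ranked cur
JOIN ranked prev
	ON prev.WaitType = cur.WaitType AND prev.rn = 2	-- previous sample
WHERE cur.rn = 1					-- latest sample
ORDER BY ResourceWaitMs DESC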

My Hekaton template

--hek_template.sql

-- 1) add filegroup (if not already there)

USE [master]
GO
begin try
ALTER DATABASE [DemoDW] ADD FILEGROUP [MemoryOptimizedFG] CONTAINS MEMORY_OPTIMIZED_DATA
end try begin catch end catch

-- 2) add filestream file into filegroup (unless already done)

USE [master]
GO
begin try
ALTER DATABASE [DemoDW] ADD FILE ( NAME = N'DemoDB_hek', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\DemoDB_hek' ) TO FILEGROUP [MemoryOptimizedFG]
end try begin catch end catch

-- 3) remove table if it already exists

USE [DemoDW]
GO
begin try
drop table recentsales
end try begin catch end catch

-- 4) create memory optimized table (hekaton!)

USE [DemoDW]
GO
create table recentsales
(
ItemID int NOT NULL PRIMARY KEY NONCLUSTERED HASH (ItemID) with (BUCKET_COUNT=1024),
[Name] varchar(50) NOT NULL,	-- length added: a bare varchar defaults to varchar(1)
Price decimal(8,2) NOT NULL	-- precision/scale added: a bare decimal defaults to decimal(18,0)
)
with (MEMORY_OPTIMIZED = ON,
DURABILITY = SCHEMA_AND_DATA
)
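
A quick sanity check that the table really is memory-optimized and durable …

SELECT name, is_memory_optimized, durability_desc
FROM sys.tables
WHERE name = 'recentsales'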

 

PostgreSQL Terms – Cluster/Instance

A cluster is a single, complete, running PostgreSQL server (IE: a cluster of databases)

  • One PostgreSQL Server
  • Listening on one port (may be multiple addresses)
  • One set of data files (including tablespaces)
  • One set of Write-Ahead Logs

Operations done on a cluster:

  • Initialization (initdb)
  • Start / Stop the cluster
  • File-level Backup / Restores
  • Streaming Replication

Objects defined at Cluster level

  • Users / Roles
  • Tablespaces
  • Databases
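
These objects live in shared catalogs, so they are visible from any database in the cluster. A minimal sketch of querying each …

SELECT rolname FROM pg_roles;        -- users / roles
SELECT spcname FROM pg_tablespace;   -- tablespaces
SELECT datname FROM pg_database;     -- databases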

PostgreSQL – Administration

Maintenance Tasks

  • Keep autovacuum enabled most of the time
  • VACUUM regularly as well
  • Check for unused indexes

Warnings: (unless you know what you are doing …)

  • Avoid using VACUUM FULL
  • REINDEX CONCURRENT does not exist (yet)
  • Do not use HASH INDEXES
  • Do not use fsync = off

RTFM

  • PostgreSQL docs are about 2000 pages
  • Technically accurate
  • Written and maintained by the developers

Security

  • Superuser is too powerful for most use cases (use SECURITY DEFINER functions instead)
  • Use a distinct userid for replication
  • GRANT minimal access rights

Upgrades

  • Maintenance releases happen about every 3 months
  • For best security – upgrade to latest maintenance release
  • Major release upgrades are harder
  • UDR technology will make Major release upgrades much easier from 9.4+

Extensions

  • PostgreSQL is designed to be extensible
  • Many new features enabled via extensions (EG: pgaudit, postgis)
  • Use them!

Scripts

  • GUIs do not allow you to apply changes in a transaction or easily record your actions (see the sketch after this list)
  • Use scripts for any administrative changes (trainer’s opinion)
  • Test them, before applying
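
DDL being transactional in PostgreSQL is what makes scripted changes safer than GUI clicks. A minimal sketch, using a hypothetical table foo …

BEGIN;
ALTER TABLE foo ADD COLUMN bar integer;
-- inspect the result, then either:
COMMIT;         -- keep the change
-- ROLLBACK;    -- or undo it completely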

Schema Change: adding a foreign key is now split into two parts:-

  1. With write-lock: ALTER TABLE foo ADD FOREIGN KEY … REFERENCES bar NOT VALID;
  2. Background task: ALTER TABLE foo VALIDATE CONSTRAINT fook;

 

PostgreSQL study notes – psql

Features

  1. non-interactive usage (example after this list)
  2. command history (up/down arrow)
  3. tab completion (sans Windows)
  4. commands terminate with semi-colon and can wrap lines
  5. defaults to supplying the currently logged-in username as the pg user
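
Feature 1 in action – a minimal sketch: the -c switch runs a single command non-interactively and exits …

 $ psql -U postgres -c "select version();"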

Tasks

  1. explore ‘psql’
    a. ‘psql --version’ returns the version of the postgresql client
    b. ‘psql -l -U postgres’ lists installed db’s then exits.
    PostgreSQL installs 3 default db’s
    1. ‘postgres’ – management db. contains user accounts, global settings, etc
    2. ‘template0’ – vanilla read-only db
    3. ‘template1’ – changeable copy of template0, used as the template for new db’s
    c. ‘psql’ with no options enters interactive mode (duh)
    a hash-mark ending the interactive prompt denotes a ‘superuser’ eg: ‘postgres=#’
    d. ‘\h’ – returns sql-specific help eg: ‘ALTER TABLE …’
    ‘\h create’ filters the above to just ‘CREATE …’ commands
    e. ‘\?’ – returns ‘psql’-specific help, ie: usable metasequences (shortcuts)
    f. ‘\l’ – returns a list of DBs. ‘\l+’ additionally returns DB sizes
    g. ‘\du[+]’ – returns a list of users with access to postgresql
    h. ‘\!’ – opens a shell from the session (‘exit’ from the shell = back to psql)
    i. ‘\! [command]’ – runs the command in a shell non-interactively and returns to psql
    j. ‘\i filename’ – executes the command(s) in the file, ie psql or sql commands
    k. multiple commands can be run on one line. Separate with a space, terminate with a semi-colon (eg: ‘\l \du;’)
    l. ‘\c’ – connect to another database or host eg: ‘\c template1’
    m. ‘\d’ – lists tables/views etc in the current DB, ‘\dS’ lists system tables, ‘\dS+’ lists system tables with sizes
    n. ‘\q’ – quit
  2. ‘psql --help’ (or ‘psql -?’) lists psql option switches & defaults (short version then long version) eg: -U (username – short version), -l (list – short version), --version (long version) …

 

PostgreSQL study notes – Installation

Download from Enterprise DB; the gui installer is the one to go for (remember xhost on linux).

1. Install. For prod you should indicate that data files are stored independently of the source tree. A ‘Cluster’ here is not a classic Cluster (IE: a Server Cluster), it just means all the databases on this particular box.

2. Explore the footprint of the install. \bin\ contains utilities like psql.exe (terminal monitor). \data\ contains all databases plus the 3x config files, pg_log/ (log files), pg_xlog/ (write-ahead log folder), and postmaster.opts (startup options).

3. Provide access to internal docs via web-browser bookmark eg: file:///C:/Program%20Files/PostgreSQL/9.5/doc/postgresql/html/index.html

4. Add the \bin folder to the path – for psql.exe etc (eg: C:\Program Files\PostgreSQL\9.5\bin). postgresql clients default to submitting the currently logged-in user’s name as the DB user name (eg: “psql” without -U will assume the user is the windows-user).

5. Add the system variable ‘PGUSER=postgres’ as a workaround, so psql etc won’t try to login to the utilities as the o/s-user (as below).
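
A minimal sketch of that workaround (setx persists a variable for future command prompts, not the current one; assumes a standard Windows install) …

 C:\> setx PGUSER postgres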

 

More thoughts against Triggers

For Rob & Karl – Triggers run outside of transactions. An insert that fires a trigger may be rolled back, but the trigger rolls on.

Triggers introduce a long-term maintenance headache. You can read a stored-procedure from top to bottom and imagine you understand what it does. But unless you examine every table it touches – you don’t. Little bits of code may be running silently which augment or even reverse some of the logic within the stored-procedure.

Triggers are used by lazy developers to ‘bolt on’ new features to applications, rather than track-down all the code that could insert/update/delete from a table and add the code (or a link to it) there.

This would be forgivable if the application code was closed or proprietary, but never when the application is open to the application developer, who just cannot be bothered to integrate code changes properly, and cares not-a-jot about long-term maintenance headaches.

(slow breaths, slow breaths)

PostgreSQL study notes – Features

  1. Object Relational Database Management System (ORDBMS): objects (eg: tables) can be related in a hierarchy: Parent -> Child
  2. Transactional RDBMS: SQL statements have implicit BEGIN; COMMIT; statements. SQL statements may also have explicit BEGIN COMMIT statements
  3. developed at UC Berkeley, along with BSD Unix
  4. one process per connection: the master process = “postmaster” auto-spawns a process per new connection
  5. processes (pids) use one cpu-core per connection: the o/s may spawn new connections on a different cpu-core. no cross-core queries
  6. multiple helper processes, which appear as ‘postgres’ instances, always running eg: stats collector, background writer (protects against sudden failure), auto-vacuum (cleanup / space reclaimer), wal sender (that’s write-ahead log)
  7. max db size: unlimited – limited by available storage (terabytes, petabytes, exabytes)
  8. max table size: 32tb – stored as multiple 1gb files – changeable (could be a problem for some o/s’s)
  9. max row size: 400gb
  10. max column size: 1gb (per row ie: per field)
  11. max indexes per table: unlimited
  12. max identifier length (db objects – tables, columns, triggers, functions): 63 bytes. this is extensible via source code
  13. default listener tcp port 5432, so postgresql may be installed as a non-privileged user
  14. users are distinct from o/s users
  15. users are authenticated globally (per server), then assigned permissions per database
  16. inheritance: tables lower in the hierarchy may inherit columns from higher tables (ie parents) so long as there are no constraints eg foreign keys (see the sketch after this list)
  17. case-insensitive commands – without double quotes (eg: select * from syslog;)
  18. case-sensitive commands – with double quotes (eg: select * from “syslog”;)
  19. three primary config files, located in postgres-root: A. pg_hba.conf (host-based access) B. postgresql.conf (general settings) C. pg_ident.conf (user mappings)
  20. integrated log rotation (configured by age or size)
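
A minimal sketch of item 16, with hypothetical tables …

CREATE TABLE vehicles (id int, wheels int);
CREATE TABLE cars (doors int) INHERITS (vehicles);

INSERT INTO cars VALUES (1, 4, 5);   -- parent columns first (id, wheels), then doors
SELECT * FROM vehicles;              -- the cars row appears here too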

Capitalization

Scripting languages are wonderful things. They use subsets of English and are therefore easy to learn (EG: update, delete, get, put).

However, capitalization is totally redundant. A capital letter marks the beginning of a sentence, but scripts do not use sentences.

One alphabet is enough (a to z) who needs another one (A to Z) that is completely equivalent?

Imagine responding to the ad-hoc query “Please email me a list of most ordered items over Christmas” with “in what font?” lol

Although … one trick that I have used over the years, having two different ways to express the same data, is to temporarily change the capitalization of text to double-check for myself that Replication etc is actually working (before the days of tracer-token poo sticks). Changing the data look without changing the data value.

Another sneaky trick is making mass changes to data whilst adding a flag (to only the changed data). For example changing every field containing ‘unsubscribe’ to ‘uNsubscribe’, or ‘yes’ to ‘yEs’.

And then repeating with un-flagged fields until only ‘unsuscribe’ or ‘Yep’ remain (lol).

This (typical DBA belt & braces) method almost always guarantees you will not induce any unintended processing errors further down the line, as the data always remains the same length, type and meaning.
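
A minimal sketch of the flag trick, with a hypothetical table and column …

-- first pass: flag the exact matches
-- (re-running is harmless on the default case-insensitive collation)
UPDATE Contacts SET MailPref = 'uNsubscribe' WHERE MailPref = 'unsubscribe';

-- whatever remains un-flagged was missed (typos, odd variants)
SELECT DISTINCT MailPref FROM Contacts;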

 

Senior Consultant Motivations

Autonomy, Mastery, and Purpose.

  1. To prioritize their own time.
  2. To master their profession.
  3. To do important work that matters.

SQL Deadlock graph – arrows

In a SQL deadlock graph the direction of the arrows is an interesting thing.

(deadlock graph image)

With my mechanistic head on, I am imagining it as …

  1. Spid-a requested a lock, and then got a lock (a two-way trip)
  2. Spid-b requested a lock, and then got a lock (arrow ends-up pointing at spid)
  3. Spid-a requested a lock, and is waiting (a one-way thing)
  4. Spid-b requested a lock, and is waiting (arrow pointing away from spid)

Capture Deadlock Graph using Profiler

To Capture a Deadlock Graph using Profiler (NB: with SQL 2008 and above you can also use an extended-event – see the sketch after these steps).

  • File / New trace
  • Connection details
  • Use Template / Blank
  • Events Selection / Locks ..1) DeadlockGraph 2) Lock:Deadlock 3) Lock:Deadlock Chain
  • Event Extraction Settings / Save Deadlock XML events separately / (somefilename)
  • Each deadlock in a distinct file
  • All Deadlocks
  • Run
  • (wait)
  • File / Export / Extract SQL Server Events / Extract deadlock Events / (somefilename2)
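
And the extended-event route mentioned above – a minimal sketch that pulls any deadlock graphs already captured by the built-in system_health session out of its ring buffer …

SELECT XEventData.XEvent.value('(data/value)[1]', 'varchar(max)') AS DeadlockGraph
FROM (SELECT CAST(target_data AS XML) AS TargetData
      FROM sys.dm_xe_session_targets st
      JOIN sys.dm_xe_sessions s ON s.[address] = st.event_session_address
      WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer') AS [Data]
CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]')
      AS XEventData(XEvent)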

Log-shipping Restore error: empty file

I noticed a log-shipping RESTORE job had started failing. Looking back through the job history I found the last two “good” executions contained errors …

*** Error: Could not apply log backup file ‘SomePath\SomeFile.trn’ to secondary database. The volume is empty.

I looked at the path\file specified and found the file was zero size.

I looked on the network-share where the files are backed-up to / copied from, and found the same file was NOT zero size.

I manually copied the file from the network-share to the destination folder (on the DR server), overwriting the empty file.

Log-shipping recovered over the next few hours.

Check every Linked Server

I was unable to cobble together some Powershell code that I could execute within a job-step to check our linked-servers were working.

So I resorted to making the best of the built-in, but flawed, “SP_testlinkedserver” (it’s a flawed procedure because when a link fails, it errors out slowly).

The code below, when run in a job-step overnight, will dynamically create one job for each linked-server on the box. The job(s) will then run and email the “DBA” operator about every linked-server that fails, before deleting themselves.

-- testlinkedservers.sql

-- get list of all linked servers on this box

	CREATE TABLE #temp (
		srv_name varchar(MAX), 
		srv_providername varchar(MAX), 
		srv_product varchar(MAX), 
		srv_datasource varchar(MAX), 
		srv_providerstring varchar(MAX), 
		srv_location varchar(MAX), 
		srv_cat varchar(MAX))
	INSERT INTO #temp EXEC sp_linkedservers
	DELETE FROM #temp WHERE srv_name LIKE 'LOGSHIP%'
	DELETE FROM #temp WHERE srv_name = @@SERVERNAME

-- loop

	DECLARE @name VARCHAR(MAX), @cmd VARCHAR(MAX), @run VARCHAR(MAX)
	WHILE (SELECT COUNT(*) FROM #temp) > 0
	BEGIN
	SELECT TOP 1 @name = srv_name FROM #temp

	-- create the job code

	SET @cmd = 'BEGIN TRANSACTION
	DECLARE @jobId BINARY(16)
	SELECT @jobId = job_id FROM msdb.dbo.sysjobs WHERE (name = N''DBA - LinkedServerTest ' + @name + ''')
	IF (@jobId IS NULL)
	BEGIN
	EXEC msdb.dbo.sp_add_job @job_name=N''DBA - LinkedServerTest ' + @name + ''', 
		@enabled=1, 
		@notify_level_eventlog=2, 
		@notify_level_email=2, 
		@notify_level_netsend=0, 
		@notify_level_page=0, 
		@delete_level=3, 
		@description=N''No description available.'', 
		@category_name=N''[Uncategorized (Local)]'', 
		@owner_login_name=N''sa'', 
		@notify_email_operator_name=N''DBA'', 
		@job_id = @jobId OUTPUT
	END

	-- create the job-step code

	IF NOT EXISTS (SELECT * FROM msdb.dbo.sysjobsteps WHERE job_id = @jobId AND step_id = 1)
	EXEC msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N''one'', 
		@step_id=1, 
		@cmdexec_success_code=0, 
		@on_success_action=1, 
		@on_success_step_id=0,
		@on_fail_action=2, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N''TSQL'', 
		@command=N''sp_testlinkedserver [' + @name + ']'', 
		@database_name=N''master'', 
		@flags=0;

	-- create instantiation code

	EXEC msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N''(local)''

	COMMIT TRANSACTION'

	-- create the job

	EXEC(@cmd)

	-- run the job

	SET @run = 'EXECUTE msdb.dbo.sp_start_job ''DBA - LinkedServerTest ' + @name + ''''
	EXEC(@run)

        -- move to next row in loop

	DELETE FROM #temp WHERE srv_name = @name
	END


Log-Shipping Monitor incorrect after outage

After a virtualization issue caused an unscheduled reboot of production, I found the DR (log-shipping) monitor incorrectly reporting issues.

It seems the linked-server was no longer working, as @@servername returned NULL on Prod.

On Production SP_AddServer failed as the servername was in sys.servers – but not with server_id 0 (as needed for @@servername).

Removing the incorrect entry with SP_DropServer failed as there were remote connections using it. And SP_DropRemoteLogin failed as there was no remote user called NULL.

The fix was to remove log-shipping first using the GUI, which was only partially successful. Then manually, by deleting jobs from prod and DR, and truncating system-tables in MSDB starting log_shipping~ (on both servers).

Once log-shipping was cleaned off both machines I could use … EXEC SP_DropServer ‘ProdServer’, ‘droplogins’ followed by EXEC SP_AddServer ‘ProdServer’, LOCAL successfully. Now that the server-name was correctly at the top of sys.servers, the only task left was to schedule a reboot so SELECT @@ServerName would pick up the new value.

After which I could re-configure log-shipping.

Why do Diff Restores fail

Diff restores have to happen directly after a full restore, and will fail if another full backup was taken between the full and the diff being restored.
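
The sequence that works (a minimal sketch, with made-up file names) …

RESTORE DATABASE SomeDB FROM DISK = N'SomeDB_full.bak' WITH NORECOVERY, REPLACE;
RESTORE DATABASE SomeDB FROM DISK = N'SomeDB_diff.bak' WITH RECOVERY;

If any other full backup ran in between, the diff is based on that backup instead, and the second restore above fails.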

Move Databases

I wanted to move about 50 databases on a Sharepoint server off the C-drive
(yeh I know).

Sadly the only place I could move both datafiles and logfiles to was the D-Drive
(yup).

Here’s the code I wrote to help me …

--move_db.sql

-- to move a user-database to different-drives on the same-server

USE [master]
GO

-- backup

	DECLARE @dbname VARCHAR(max) = 'SomeDatabaseName' -- database name

	DECLARE @backup_cmd VARCHAR(MAX) = 'BACKUP DATABASE ['+ @dbname + ']
	TO DISK = N''\\SomeNetworkShare\SomeServerName_' + @dbname + '.bak''
	WITH INIT, COMPRESSION, STATS = 1;'

	SELECT (@backup_cmd)
	--EXEC (@backup_cmd)


-- kill connections

	DECLARE @kill_cmd VARCHAR(MAX) = 'DECLARE @kill varchar(8000) = '''';
	SELECT @kill=@kill+''kill ''+convert(varchar(5),spid)+'';'' from master..sysprocesses 
	WHERE dbid=db_id(''' + @dbname + ''') and spid>50;
	EXEC (@kill);'

	SELECT (@kill_cmd)
	--EXEC (@kill_cmd)

-- restore

	DECLARE @restore_cmd VARCHAR(MAX) = 'RESTORE DATABASE [' + @dbname + ']
	FROM DISK = N''\\SomeNetworkShare\SomeServerName_' + @dbname + '.bak''
	WITH FILE = 1,  
	MOVE N''' + @dbname + ''' TO N''D:\SQL_Data\' + @dbname + '.mdf'',
	MOVE N''' + @dbname + '_log'' TO N''D:\SQL_Log\' + @dbname + '_log.ldf'',
	REPLACE,  STATS = 1;'

	SELECT (@restore_cmd)
	--EXEC (@restore_cmd)

Estimate backup size

To get a quick estimate of the full backup size of our user databases – for planning – I ran this across production and pasted ‘database_name’ and ‘reserved’ into a spreadsheet.

EXECUTE master.sys.sp_MSforeachdb 'USE [?]; if db_id()>4 EXEC sp_spaceused'

sp_whoisactive

My favorite configuration of sp_WhoIsActive is …

EXEC [master].[dbo].[sp_WhoIsActive] @get_plans=1, @get_additional_info = 1
--, @get_task_info = 2
--, @sort_order = '[cpu] desc'
--, @filter_type = 'session', @filter = '2018'
--, @filter_type = 'login', @filter = 'windowslogin'
, @output_column_list = '[dd%][session_id][block%][sql_text][sql_command][login_name][CPU%][wait_info]
[tasks][tran_log%][database%][percent%][Program%][host%][reads%][writes%][query_plan][locks][%]'

Who deleted that data?

Sadly I could not find out.

Going forward – To capture deletes on a table I set-up a regular ‘after delete’ trigger with some extra columns to hold system functions.

This allowed me to capture the date/time, PC-Name and login that originated deletes. Here is my working lab …

--lab_trigger_deletes.sql

--create table to be monitored and add some data
	CREATE TABLE t1 (c1 INT, c2 int)
	INSERT INTO t1 VALUES (1,7), (2,8), (3,9)

-- create audit table (c3 = date/time, c4 = host, c5 = login, c6 = original login)
	CREATE TABLE t1_audit (c1 INT, c2 INT, c3 DATETIME, c4 SYSNAME, c5 SYSNAME, c6 SYSNAME)

-- check contents of both tables
	SELECT * from t1
	SELECT * FROM t1_audit

-- create trigger
	CREATE TRIGGER trg_ItemDelete 
	ON dbo.t1 
	AFTER DELETE 
	AS
	INSERT INTO dbo.t1_audit(c1, c2, c3, c4, c5, c6)
			SELECT d.c1, d.c2, GETDATE(), HOST_NAME(), SUSER_SNAME(), ORIGINAL_LOGIN()
			FROM Deleted d

-- delete a row (firing the trigger)
	DELETE FROM t1 WHERE c1 = 2

-- check contents of both tables again
	SELECT * from t1
	SELECT * FROM t1_audit

-- tidy up
	IF OBJECT_ID ('trg_ItemDelete', 'TR') IS NOT NULL DROP TRIGGER trg_ItemDelete;
   	drop TABLE t1
	drop TABLE t1_audit

Two SSDT’s

Microsoft seems to have two products called SSDT (SQL Server Data Tools)

1. A development environment for SSIS etc.

2. A source control tool that uses TFS.

To install 1, you choose SSDT as an option whilst installing SQL Server 2008 R2 or 2012.

To install 2, it is a separate download for Visual Studio 2010 and 2012, or a post-installation option for Visual Studio 2013.

(SSDT is really just a badly integrated set of tools. I find treating it as two distinct products keeps me sane whilst googling)

Log Space

I noticed a jump in logfile size the other day and wondered how to predict an autogrowth event.

I know old data is truncated after a log-backup, but that’s internal and not normally visible.

I came up with this to run across production …

--LogSpace.sql
-- To help find near-full logfiles that may autogrow soon.

-- create table to hold raw data
CREATE TABLE #temp (DBName varchar(100), SizeMB int, UsedPct float, [STATUS] bit)

-- populate table
INSERT #temp EXEC('DBCC SQLPERF(logspace)')

-- output
SELECT DBName, SizeMB, UsedPct FROM #temp --WHERE UsedPct > 90 -- 90% full

-- clean-up
DROP TABLE #temp

Orphaned Users

Here’s a quick script to fix orphaned users after a migration …

--OrphanedUsers.sql

-- create temp table
	CREATE TABLE #orphans (oname VARCHAR(100), oSID VARCHAR(100) PRIMARY KEY)
	DECLARE @cmd VARCHAR(MAX), @name VARCHAR(100)

-- populate temp table with orphaned logins
	INSERT #orphans(oname,osid)
	EXEC sp_change_users_login @Action='Report';

-- loop to fix / or else create login with default pw
	WHILE (SELECT COUNT(*) FROM #orphans) > 0
	BEGIN
		SELECT TOP 1 @name = oname FROM #orphans 
		SET @cmd = 'EXEC sp_change_users_login ''Auto_Fix'', ''' + @name + ''', NULL, ''B3r12-3x$098f6'';'
		DELETE FROM #orphans WHERE oname = @name
		EXEC (@cmd)
	END

-- tidy up
	DROP TABLE #orphans 

Making index changes to Production

These days I use a SQL Job called ‘DBA – index maint’.

Whenever I have an index change to make I paste the script into a new step, name that step with today’s date, change the ‘start step’ to that step, and schedule it to run once overnight.

This gives me a history and outcome, alongside the exact action.

SQL Safe error “Cannot connect to SQL Server instance”

This was fixed by re-installing SQL Safe. Bonus – Here is a working restore command with move

EXEC [master].[dbo].[xp_ss_restore] 
	@database = 'SomeDatabase',
	@filename = 'J:\backups\SomeDatabase.BAK', 
	@backuptype = 'Full',
	@withmove = 'SomeDatabase_data "J:\sql_data\SomeDatabase_data.mdf"',
	@withmove = 'SomeDatabase_log "J:\sql_log\SomeDatabase_log.ldf"',
	@recoverymode = 'recovery',
	@replace = '1';

Compress all tables

As part of my management of our MDW I wrote this to generate commands to compress the user-tables in the database (paste the output into a new window and run it).

SELECT 'ALTER TABLE ' + s.name + '.' + t.name + ' REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)' 
FROM sys.tables AS t
INNER JOIN sys.schemas AS s
ON t.[schema_id] = s.[schema_id]

Diff Backup / Restore (part 3)

On my Reporting Server I created the SQL Job ‘DBA – Restore MyDatabase’ with two steps 1) Restore MyDatabase, and 2) Add Users.

Step 2 just de-orphaned some user accounts EG:-

EXEC sp_change_users_login 'Auto_Fix', 'uidServerLink';

Step 1 contained this code …

-- Restore

	EXECUTE [master].[dbo].[DatabaseRestore] 
		@dbName = 'MyDatabase',
		@SourceServer = 'MySourceServer',
		@backupPath = 'M:\Backups'


-- Change recovery model

	ALTER DATABASE MyDatabase set recovery SIMPLE

https://richardbriansmith.wordpress.com/2015/10/02/diff-backup-restore-part-1/

https://richardbriansmith.wordpress.com/2015/10/02/diff-backup-restore-part-2/

Diff Backup / Restore (part 2)

On the Production Server I created the Job ‘DBA – Backup/Restore MyDatabase’ with two steps 1) Backup MyDatabase, and 2) Restore MyDatabase.

Step 2 just started the Restore job on the Reporting server (detailed in “Part 3”)

Step 1 needed to check that no random backups had happened in the last 24 hours before starting a Diff Backup …

-- If it's Friday or the LSNs do not match - do a FULL backup

	DECLARE @DBName VARCHAR(100) = 'MyDatabase' --<< Database Name
	IF (SELECT DATEPART(dw, GETDATE())) = 6	--<< = Friday
	OR (SELECT MAX(differential_base_lsn)
	    FROM [MyProdServer].[master].[sys].[master_files]
	    WHERE [name] LIKE '%' + @DBName + '%')
	    !=
	   (SELECT MAX(differential_base_lsn)
	    FROM [MyReportServer].[master].[sys].[master_files]
	    WHERE [name] LIKE '%' + @DBName + '%')

	BEGIN
		SELECT 'We can only do a FULL backup'
		EXECUTE [master].[dbo].[DatabaseBackup] 
			@Databases = @DBName,
			@Directory = N'\\MyReportServer\backups', 
			@BackupType = 'FULL', 
			@CleanupTime = 1, --<< ONE HOUR
			@CleanupMode = 'BEFORE_BACKUP',
			@Compress = 'Y',
			@CheckSum = 'Y',
			@LogToTable = 'Y'
	END

-- Else do a DIFF backup

	ELSE
	BEGIN
		SELECT 'we can do a diff backup' 
		EXECUTE [master].[dbo].[DatabaseBackup] 
			@Databases = @DBName,
			@Directory = N'\\MyReportServer\backups', 
			@BackupType = 'DIFF',
			@CleanupTime = 168, --<< ONE WEEK
			@CleanupMode = 'BEFORE_BACKUP',
			@Compress = 'Y',
			@CheckSum = 'Y',
			@LogToTable = 'Y'
	END

https://richardbriansmith.wordpress.com/2015/10/02/diff-backup-restore-part-3/

https://richardbriansmith.wordpress.com/2015/10/02/diff-backup-restore-part-1/

Diff Backup / Restore (part 1)

Although Ola’s Backup solution works great …
https://ola.hallengren.com/sql-server-backup.html

For this project I needed a corresponding Restore procedure, so I could set up nightly Diff Backup / Restores (from Prod to Reporting), without having to write too much code 🙂

I modified the code from here for our environment.
http://jason-carter.net/professional/restore-script-from-backup-directory-modified.html

In my next posts I will detail the SQL Jobs for this project.

USE [master]
GO

CREATE PROCEDURE [dbo].[DatabaseRestore]
		@dbName sysname,
		@SourceServer NVARCHAR(500),
		@backupPath NVARCHAR(500)

AS

/* To restore backups created from ola.hallengren's backup solution (RS) */

	SET NOCOUNT ON
	DECLARE @cmd NVARCHAR(500),
			@lastFullBackup NVARCHAR(500),
			@lastDiffBackup NVARCHAR(500),
			@backupFile NVARCHAR(500)

	DECLARE @fileList TABLE (backupFile NVARCHAR(255))
	DECLARE @directoryList TABLE (backupFile NVARCHAR(255))

/* Kill any connections */

	DECLARE @kill VARCHAR(8000) = '';
	SELECT  @kill = @kill + 'kill ' + CONVERT(VARCHAR(5), spid) + ';'
		FROM [master].[dbo].[sysprocesses]
		WHERE dbid = DB_ID(@dbName)
		AND spid > 50;
	EXEC (@kill);

/* Match that of Olas output */

	SET @backupPath = @backupPath + '\' + @SourceServer + '\' + @dbName + '\'

/* Get List of Files */

	SET @cmd = 'DIR /s /b /O D ' + @backupPath
	IF (SELECT value_in_use
		FROM sys.configurations
		WHERE name = 'xp_cmdshell') = 0
	BEGIN /* cmd shell is disabled */
		EXEC sp_configure 'show advanced options', 1 RECONFIGURE
		EXEC sp_configure 'xp_cmdshell', 1 RECONFIGURE
		INSERT INTO @fileList(backupFile) EXEC master.sys.xp_cmdshell @cmd
		EXEC sp_configure 'xp_cmdshell', 0 RECONFIGURE
		EXEC sp_configure 'show advanced options', 0 RECONFIGURE
	END
	ELSE /* cmd shell is enabled */
		INSERT INTO @fileList(backupFile) EXEC master.sys.xp_cmdshell @cmd

/* Find latest full backup */

	SELECT @lastFullBackup = MAX(backupFile) 
	FROM @fileList 
	WHERE backupFile LIKE '%' + @SourceServer + '_' + @dbName + '_FULL_%.bak'

	SET @cmd = 'RESTORE DATABASE [' + @dbName + '] FROM DISK = ''' 
	   + @lastFullBackup + ''' WITH NORECOVERY, REPLACE'
	SELECT (@cmd); EXEC (@cmd)

/* Find latest diff backup */

	SELECT @lastDiffBackup = MAX(backupFile)
	FROM @fileList 
	WHERE backupFile  LIKE '%' + @SourceServer + '_' + @dbName + '_DIFF_%.bak'
	AND RIGHT(backupfile, 19) > RIGHT(@lastFullBackup, 19)

/* check to make sure there is a diff backup */

	IF @lastDiffBackup IS NOT NULL
		BEGIN
		SET @cmd = 'RESTORE DATABASE [' + @dbName + '] FROM DISK = ''' 
			   + @lastDiffBackup + ''' WITH NORECOVERY'
		SELECT (@cmd); EXEC (@cmd)
		SET @lastFullBackup = @lastDiffBackup
		END

--/* check for log backups */

--	DECLARE backupFiles CURSOR FOR 
--	SELECT backupFile 
--	FROM @fileList
--	WHERE backupFile LIKE  '%' + @SourceServer + '_' + @dbName + '_LOG_%.trn'
--	AND RIGHT(backupfile, 19) > RIGHT(@lastFullBackup, 19)

--	OPEN backupFiles 

--/* Loop through all the files for the database */

--	FETCH NEXT FROM backupFiles INTO @backupFile 

--	WHILE @@FETCH_STATUS = 0 
--		BEGIN 
--		   SET @cmd = 'RESTORE LOG [' + @dbName + '] FROM DISK = ''' 
--			   + @backupFile + ''' WITH NORECOVERY'
--		   SELECT (@cmd); EXEC (@cmd)
--		   FETCH NEXT FROM backupFiles INTO @backupFile 
--		END

--	CLOSE backupFiles 
--	DEALLOCATE backupFiles 

/* put database in a useable state */

	SET @cmd = 'RESTORE DATABASE [' + @dbName + '] WITH RECOVERY'
	SELECT (@cmd); EXEC (@cmd)

GO

https://richardbriansmith.wordpress.com/2015/10/02/diff-backup-restore-part-2/

https://richardbriansmith.wordpress.com/2015/10/02/diff-backup-restore-part-3/

sysutility_get_views_data_into_cache_tables

Having uninstalled UCP and reinstalled MDW I found the MDW job “sysutility_get_views_data_into_cache_tables” failing at step-3 with an error message about invalid synonyms.

The fix was to re-create the MSDB synonyms from a clean SQL 2012 server.

Namely …

CREATE SYNONYM [dbo].[syn_sysutility_ucp_databases] 
FOR [msdb].[dbo].[sysutility_ucp_databases_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_filegroups] 
FOR [msdb].[dbo].[sysutility_ucp_filegroups_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_dacs] 
FOR [msdb].[dbo].[sysutility_ucp_dacs_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_smo_servers] 
FOR [msdb].[dbo].[sysutility_ucp_smo_servers_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_volumes] 
FOR [msdb].[dbo].[sysutility_ucp_volumes_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_computers] 
FOR [msdb].[dbo].[sysutility_ucp_computers_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_logfiles] 
FOR [msdb].[dbo].[sysutility_ucp_logfiles_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_datafiles] 
FOR [msdb].[dbo].[sysutility_ucp_datafiles_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_space_utilization] 
FOR [msdb].[dbo].[sysutility_ucp_space_utilization_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_cpu_utilization] 
FOR [msdb].[dbo].[sysutility_ucp_cpu_utilization_stub]

DBA Rule #8

Avoid 3rd Party management software. Backup software doubly so.

SSMS on Windows Server 2012

To start SSMS on Windows Server 2012 R2 Standard I …

1) Clicked in the bottom-left corner of the screen – which brought up a bunch of blue and green boxes.

2) Clicked on the little down-arrow in a circle near the bottom-left of the screen.

2b) If not visible I clicked a blank part of the screen – outside the boxes.

3) This is like an ‘All Programs’ screen.

4) Found and clicked “SQL Server Management Studio”.

DBA Rule #7

Get good quickly

Better than training courses, reading, or a junior role. The fastest way to become great at your job is to do it badly.

Your future self will thank you for fearlessly diving in (so long as your current self can remain employed :))

DBA Rule #6

You can serve only one God.

Whatever you fall asleep thinking about, your sub-conscious will continue to work on overnight.

If your last thoughts are about impressing your boss, your technical work the next day will be pedestrian.

The two groups I see perpetually falling foul of this rule are Manager\DBAs and Developer\DBAs.

DBA Rule #4

Simplicity is the ultimate complexity.

To get to a simple solution, keep working once you have a complex solution.

DBA Rule #5

Wisdom is more important than knowledge.

For example, it takes at least ten years to progress from Junior DBA to Data Architect.

Not because it takes that long to amass the skills necessary to do the job, but because it takes that long to gain the experience to be successful at it.

DBA Rule #3

Make changes singly.

It may be tempting to sort out that long-standing problem during a migration, but the chances of the whole thing failing are increased exponentially.

Mitigate risk and aid troubleshooting by making changes one at a time.

DBA Rule #2

Try to avoid Scripting

A well-crafted SQL statement is a thing of beauty, and writing one gives a great deal of satisfaction. But creating art is not the job.

If there is no other choice but using a script, avoid wasting time honing, refining, and later augmenting your own.

Download industrial-strength scripts from a trusted source. But read and understand them before using in production.

DBA Rule #1

These are the (risk mitigating) rules I have worked-out for myself over the years as a DBA …

Delete nothing

If you have to, make a copy first. And never automate deletes.

Slave SQL Jobs

To run a SQL-Job on the same Server use …

EXECUTE msdb.dbo.sp_start_job 'JobName'

To run a SQL-Job on another Server use …

EXECUTE [ServerName].msdb.dbo.sp_start_job 'JobName'

(assuming LinkedServers etc are already set-up)

** Update ** I had an issue where the remote job-name could not be found. The cause (I saw in sys.servers) was that I had used the wrong version of SSMS to create the link. The fix was to amend a working scripted-out link.

Differential backup / restore using SQL Safe

I wanted to improve a backup / restore sql-job that populated a Reporting Server every night. I felt it would be quicker if it did a weekly Full backup and daily Diff backups.

The 3rd-party backup software was “SQL Safe Backup” from Idera.

I used this code on the Production Server …

--BackupDBA.sql

-- FULL Backup if Friday

	IF (SELECT DATEPART(dw, GETDATE())) = 6
	BEGIN
		EXEC [master].[dbo].[xp_ss_backup]
		@database = 'DBA',
		@filename = '\\ReportingServer\m$\DBA_Full.safe',
		@backuptype = 'Full',
		@overwrite = 1,
		@verify = 1;
	END;


-- DIFF Backup

	EXEC [master].[dbo].[xp_ss_backup]
	@database = 'DBA',
	@filename = '\\ReportingServer\m$\DBA_Diff.safe',
	@backuptype = 'Diff',
	@overwrite = 1,
	@verify = 1;

… and this code I scheduled a few hours later on the Reporting Server …

--RestoreDBA.sql

-- First, kill any connections

	DECLARE @kill VARCHAR(8000) = '';
	SELECT  @kill = @kill + 'kill ' + CONVERT(VARCHAR(5), spid) + ';'
	FROM    [master].[dbo].[sysprocesses]
	WHERE   dbid = DB_ID('DBA')
	AND spid > 50;
	EXEC (@kill);


-- Restore FULL

	EXEC [master].[dbo].[xp_ss_restore] 
	@database = 'DBA',
	@filename = 'M:\DBA_Full.safe', 
	@backuptype = 'Full',
	@recoverymode = 'norecovery', 
	@replace = '1';


-- Restore DIFF

	EXEC [master].[dbo].[xp_ss_restore] 
	@database = 'DBA',
	@filename = 'M:\DBA_Diff.safe', 
	@backuptype = 'Diff',
	@recoverymode = 'recovery', 
	@replace = '1';

Finally, I added a step on Reporting to repair the orphaned user-accounts …

USE [DBA]
GO

EXEC sp_change_users_login 'Auto_Fix', 'ReportingLogin';
EXEC sp_change_users_login 'Auto_Fix', 'LinkedServerLogin';
GO

Recovery Pending

After exhausting all the usual methods to get a database out of ‘Recovery Pending’ mode, I deleted it then reattached the mdf file.

(NB: I made a copy of the mdf file first, but did not need it.)

Reset File size and Autogrowth settings

This is the logical conclusion of my reset_tempdb and reset_model scripts. It shows all of the file sizes and autogrowth settings in the current instance, and the code to change them.

The suggested sizes (128 MB for Logfiles and 256 MB for Datafiles) are reasonable for Model, but should probably be amended for other databases dependent on current size and autogrowth history.

--autogrowth_all.sql

-- get current settings & create commands to change them
select	db.Name, case mf.[Type] when 0 then 'DATA' else 'LOG' end [FileType],
	convert(varchar(50), size*8/1024) + ' MB' [CurrentSize], 
	case mf.is_percent_growth 
		when 1 then convert(varchar(50), growth) + ' %' 
		when 0 then convert(varchar(50), growth*8/1024) + ' MB' end [AutoGrowth],
	'ALTER DATABASE [' + db.Name + '] MODIFY FILE (NAME = N''' + mf.Name + ''', SIZE = ' +        case mf.[type] when 0 then '256' else '128' end + 'MB);' [ReSizeCommand],
	'ALTER DATABASE [' + db.Name + '] MODIFY FILE (NAME = N''' + mf.Name + ''', FILEGROWTH = ' +  case mf.[type] when 0 then '256' else '128' end + 'MB);' [AutogrowthCommand]
from [master].[sys].[master_files] mf
join [master].[sys].[databases] db 
on mf.database_id = db.database_id
order by mf.database_id, mf.[type];

Careers Advice

I feel ‘doing’ is the best prep. EG: The more I watch Brent tune queries the fatter my a** gets🙂

Cloud storage?

Cloud storage??

Hardware-free disaster-recovery (yesss).

PostgreSQL command-line

Having logged in locally on a linux box, I used these steps to access the database via a terminal session …

 $ sudo su - postgres
 [sudo] password for richard: *****
 $ psql
 Password: ******
 postgres=#
 postgres=# select version();
 PostgreSQL 9.4.4 on x86_64 (Red Hat 4.1.2-55), 64-bit

Line 1) As root, I switched to the linux user “postgres” (picking up its environment variables)
Line 2) I typed in my password
Line 3) And ran the executable (psql)
Line 4) I typed in the password of the postgres user
Line 5) Success! and to prove it …
Line 6) My first SELECT statement, lol

BTW: to leave I typed “\q” to quit the PostgreSQL environment, “exit” to leave the postgres account, then “exit” again to close the terminal session.

Importing MID/MIF files into SQL Server 2012

What a true pain that was! “shonky” (thanks Rory). There just is no proper documentation for Spatial data on MSSQL Server yet (Oracle is so much simpler for GIS).

Firstly – to get OGR2OGR installed I downloaded the OSGeo4W setup utility, which failed with “Unable to get setup.ini from http://download.osgeo.org/osgeo4w/”.

Eventually I found the trick was to choose “Advanced Install” from the first screen, then “Use IE5 Settings”.

Then – after the local installation had completed – to get it installed on a server without internet connections I copied the entire 2GB folder c:\OSGeo4W\ across the network.

I needed to create a system-variable called “GDAL_DATA” pointing to the path of ‘coordinate_axis.csv’, being “D:\OSGeo4W\share\gdal\”.

Lastly, I created a short-cut on my desk-top to cmd.exe and customized it so it starts in the ogr2ogr.exe folder.

To import using ogr2ogr.exe … I played with a number of commands in a BAT file until this one worked (all on one line) …

D:\OSGeo4W\bin\ogr2ogr 
-f "MSSQLSpatial" 
"MSSQL:Server=localhost;Database=GIS;trusted_connection=yes" 
"D:\Media\Spatial Data\BFRS_10_MILE.mid" 
-t_srs "EPSG:27700" 
-lco "GEOM_TYPE=geometry" 
-lco "GEOM_NAME=geog27700"

To ‘decode’ that as much as I can …

– Line-1 is the path to ogr2ogr.exe
– Line-2 means import to SQL Server
– Line-3 is the SQL Server connection string (SQL Server 2012 btw)
– Line-4 is the path to the MID file (the MIF file will be imported too)
– Line-5 converts it to ‘Geometry’ projected on the UK template

Emergency Friday afternoon Backups

I had a situation where a VMWare upgrade stopped the backup software from working across the board (mmm, a little notice would have been nice).

To avoid the possibility of a full log crashing an application over the weekend, I created two Jobs (full backups every night at 10pm, and transaction-log backups every 3 hours) and deployed them.

The script (and scripted-out job) had to be compatible with all versions of SQL Server. The only prep I had to do was creating a folder for each server (_instance) at the backup destination.

The code I downloaded from http://www.mssqltips.com/sqlservertip/1070/simple-script-to-backup-all-sql-server-databases/ evolved into this …

--backup_all.sql

	DECLARE @name VARCHAR(50) -- database name  
	DECLARE @path VARCHAR(256) -- path for backup files  
	DECLARE @fileName VARCHAR(256) -- filename for backup 
	DECLARE @fileDate VARCHAR(20) -- added to file name
	DECLARE @sname varchar(100) -- server name
	DECLARE @DeleteDate DATETIME -- purge date ..
	SET @DeleteDate = getdate()-14 -- .. two weeks
	
-- get server name
	select @sname = replace(@@servername, '\', '_')
 
-- specify backup directory
	SET @path = '\\SomeServerName\BACKUPS\' + @sname + '\'

-- specify filename format
	SELECT @fileDate = CONVERT(VARCHAR(20),GETDATE(),112)
 
-- setup cursor
	DECLARE db_cursor CURSOR FOR  
	SELECT name FROM master.dbo.sysdatabases
	WHERE name NOT IN ('model', 'tempdb')  -- exclude these databases
 	OPEN db_cursor
	FETCH NEXT FROM db_cursor INTO @name   
 
 --loop through databases backing them up
	WHILE @@FETCH_STATUS = 0   
	BEGIN   
		SET @fileName = @path + @name + '_' + @fileDate + '.BAK'  
		BACKUP DATABASE @name TO DISK = @fileName  
		FETCH NEXT FROM db_cursor INTO @name
	END
	
-- close cursor
	CLOSE db_cursor   
	DEALLOCATE db_cursor   

-- purge old backups (but manually delete SQL2K)
	if @@version not like '%2000%' 
	exec master.sys.xp_delete_file 0, @path, 'BAK', @DeleteDate, 0

… and this for the log backups …

--backup_all_t.sql

	DECLARE @name VARCHAR(50) -- database name   
	DECLARE @path VARCHAR(256) -- path for backup files   
	DECLARE @fileName VARCHAR(256) -- filename for backup   
	DECLARE @fileDate VARCHAR(20) -- used for file name
	DECLARE @sname varchar(100) -- server name
	DECLARE @DeleteDate DATETIME -- purge date ..
	SET @DeleteDate = getdate()-7 -- .. one week

-- get server name
	select @sname = replace(@@servername, '\', '_')

-- specify backup directory
	SET @path = '\\SomeServerName\BACKUPS\' + @sname + '\' 

-- specify filename format
	SELECT @fileDate = CONVERT(VARCHAR(20),GETDATE(),112)  
	   + '_' + REPLACE(CONVERT(VARCHAR(20),GETDATE(),108),':','') 

-- setup cursor
	DECLARE db_cursor CURSOR FOR   
	SELECT name FROM master.dbo.sysdatabases  
	WHERE name NOT IN ('master','model','msdb','tempdb')  -- exclude these databases
	AND DATABASEPROPERTYEX(name, 'Recovery') NOT IN ('SIMPLE') -- exclude Simple dbs
	OPEN db_cursor    
	FETCH NEXT FROM db_cursor INTO @name    

 --loop through databases, backing them up 
	WHILE @@FETCH_STATUS = 0    
	BEGIN    
		SET @fileName = @path + @name + '_' + @fileDate + '.TRN'   
		BACKUP LOG @name TO DISK = @fileName  
		FETCH NEXT FROM db_cursor INTO @name    
	END
	
-- close cursor
	CLOSE db_cursor    
	DEALLOCATE db_cursor    

-- purge old backups (but manually delete SQL2K)
	if @@version not like '%2000%' 
	exec master.sys.xp_delete_file 0, @path, 'TRN', @DeleteDate, 0

I heavily commented it as I was near the end of my contract, and knew “temporary solutions” can persist for a long time😉.