About Me

I am an MCSE in Data Management and Analytics, specializing in MS SQL Server, and an MCP in Azure. I have over 13 years of experience in the IT industry, with expertise in data management, Azure Cloud, data-center migration, infrastructure architecture planning, virtualization, and automation. Contact me if you are looking for guidance on getting your infrastructure provisioning automated through Terraform. I write mostly to store my own experiences for future searching and reading on my own blog, but hopefully they help others along the way. Thanks.

Detect the head blocker and retrieve the job step name or job name for a SQL Server Agent job.

What should you do when you encounter blocking sessions?

Use the script below to identify the head blocker:
---------------------------------------------------

SET NOCOUNT ON;
SET CONCAT_NULL_YIELDS_NULL OFF;
GO
SELECT spid, blocked, REPLACE(REPLACE(t.text, CHAR(10), ' '), CHAR(13), ' ') AS batch
INTO #t
FROM sys.sysprocesses r CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t;
GO
WITH blockers (spid, blocked, level, batch)
AS
(
    SELECT spid,
           blocked,
           CAST(REPLICATE('0', 4 - LEN(CAST(spid AS VARCHAR))) + CAST(spid AS VARCHAR) AS VARCHAR(1000)) AS level,
           batch
    FROM #t r
    WHERE (blocked = 0 OR blocked = spid)
      AND EXISTS (SELECT * FROM #t r2 WHERE r2.blocked = r.spid AND r2.blocked <> r2.spid)
    UNION ALL
    SELECT r.spid,
           r.blocked,
           CAST(blockers.level + RIGHT(CAST((1000 + r.spid) AS VARCHAR(100)), 4) AS VARCHAR(1000)) AS level,
           r.batch
    FROM #t AS r
    INNER JOIN blockers ON r.blocked = blockers.spid
    WHERE r.blocked > 0 AND r.blocked <> r.spid
)
SELECT N' ' + REPLICATE(N'| ', LEN(level) / 4 - 2)
     + CASE WHEN (LEN(level) / 4 - 1) = 0 THEN 'HEAD - ' ELSE '|------ ' END
     + CAST(spid AS NVARCHAR(10)) + ' ' + batch AS blocking_tree
FROM blockers
ORDER BY level ASC;
GO
DROP TABLE #t;
GO
---------------------------------


Get the head blocker session ID from the query above.

Then execute:

select program_name from sys.sysprocesses where spid = <session id>

For example:
select program_name from sys.sysprocesses where spid = 70

If program_name looks like SQLAgent - TSQL JobStep (Job 0x8F2B94F4E17DFC44AE94F157CE24C324 : Step 1),


then execute the command below:

select * from msdb.dbo.sysjobsteps where job_id = 0x8F2B94F4E17DFC44AE94F157CE24C324
-- step_name = <shows the step_name of the job>

Get the job_id from the command above, then execute the command below to get the job name.

select * from msdb.dbo.sysjobs where job_id = 'F4942B8F-7DE1-44FC-AE94-F157CE24C324' -- Reorganize indexes in the XXX table.
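As a shortcut, the hexadecimal job ID embedded in program_name can be converted to the uniqueidentifier format directly in T-SQL, which accounts for the byte reordering you see between the hex value and the GUID; a minimal sketch:

```sql
-- Convert the hex job ID from program_name straight to the GUID form
-- stored in msdb.dbo.sysjobs (the byte order of the first three groups
-- is swapped automatically by the conversion).
SELECT CONVERT(uniqueidentifier, 0x8F2B94F4E17DFC44AE94F157CE24C324) AS job_id;
-- Returns: F4942B8F-7DE1-44FC-AE94-F157CE24C324
```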


Check what the job is doing.
Get approval from your customer and take the appropriate action.
If the customer asks you to cancel or kill the job, kill it.
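
If the customer approves, the job can be stopped or the blocking session terminated; a sketch (the job name and session ID 70 are the values found above, substitute your own):

```sql
-- Prefer stopping the Agent job cleanly, using the job name found
-- in msdb.dbo.sysjobs:
EXEC msdb.dbo.sp_stop_job @job_name = N'<Job Name>';

-- Or terminate the blocking session directly:
KILL 70;

-- Check rollback progress of the killed session, if any:
KILL 70 WITH STATUSONLY;
```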

Then execute the head blocker script above again.

Or run:

select program_name from sys.sysprocesses where spid = 70 -- it will return no rows once the session is gone

In this way we identify the head blocker for a session and find the related SQL Server Agent job details.

Thanks for reading.







Rebuild or reorganize all indexes of a table and update its statistics.

A customer asks to rebuild or reorganize all indexes of a table.

1. Get the table name from the customer.
2. Use sys.dm_db_index_physical_stats to check fragmentation.

-- Get the avg_fragmentation_in_percent report for a table & its corresponding index.

USE master;
GO
-- In this example, OBJECT_ID is evaluated in the context of the master database. 
-- Because Person.Address does not exist in master, the function returns NULL.
-- When NULL is specified as an object_id, all objects in the database are returned.
-- The same results are returned when an object that is not valid is specified.

SELECT * FROM sys.dm_db_index_physical_stats
    (DB_ID(N'AdventureWorks'), OBJECT_ID(N'Person.Address'), NULL, NULL , 'DETAILED');
GO
-- This example demonstrates the results of specifying a valid object name
-- that exists in both the current database context and
-- in the database specified in the database_id parameter of the 
-- sys.dm_db_index_physical_stats function.
-- An error is returned because the ID value returned by OBJECT_ID does not
-- match the ID value of the object in the specified database.

CREATE DATABASE Test;
GO
USE Test;
GO
CREATE SCHEMA Person;
GO
CREATE Table Person.Address(c1 int);
GO
USE AdventureWorks2012;
GO
SELECT * FROM sys.dm_db_index_physical_stats
    (DB_ID(N'Test'), OBJECT_ID(N'Person.Address'), NULL, NULL , 'DETAILED');
GO


Get all index names from the query above; if fragmentation is moderate, reorganizing the indexes is usually sufficient.

Then

USE [<DBNAME>]
GO
ALTER INDEX [<Index Name>] ON [dbo].[<Table Name>] REORGANIZE
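
A common rule of thumb from the Microsoft documentation is to reorganize when avg_fragmentation_in_percent is roughly between 5% and 30%, and rebuild when it is above 30%. A sketch applying that threshold; the index and table names here are placeholders:

```sql
-- Moderate fragmentation (~5-30%): reorganize (online, incremental).
ALTER INDEX [IX_Example] ON [dbo].[ExampleTable] REORGANIZE;

-- Heavy fragmentation (>30%): rebuild. Note that REBUILD also updates
-- the index's statistics with a full scan; REORGANIZE does not.
ALTER INDEX [IX_Example] ON [dbo].[ExampleTable] REBUILD;

-- Or rebuild every index on the table at once:
ALTER INDEX ALL ON [dbo].[ExampleTable] REBUILD;
```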


Then update the statistics of that table. The query below lists the statistics on the table and generates a ready-to-run UPDATE STATISTICS command for each one:

 SELECT
OBJECT_NAME([sp].[object_id]) AS "Table",
[sp].[stats_id] AS "Statistic ID",
[s].[name] AS "Statistic",
[sp].[last_updated] AS "Last Updated",
[sp].[rows],
[sp].[rows_sampled],
[sp].[unfiltered_rows],
[sp].[modification_counter] AS "Modifications",
'UPDATE Statistics ' + OBJECT_NAME([sp].[object_id]) + ' ( ' + [s].[name] + ')' + ' with SAMPLE 30 percent'

FROM [sys].[stats] AS [s]
OUTER APPLY sys.dm_db_stats_properties ([s].[object_id],[s].[stats_id]) AS [sp]
WHERE [s].[object_id] = OBJECT_ID(N'<TableName>');
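
The last column of the query above builds one command per statistic. Executed by hand, each generated line looks like this (the table and statistic names are placeholders):

```sql
-- Example of one generated command:
UPDATE STATISTICS dbo.ExampleTable (IX_Example) WITH SAMPLE 30 PERCENT;

-- Or simply refresh all statistics on the table with the default sample:
UPDATE STATISTICS dbo.ExampleTable;
```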









RPO & RTO Difference & Kerberos and NTLM Difference.

Understanding recovery point objective and recovery time objective


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~RPO~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The first one is the recovery point objective, or RPO. This is the point in time that data can be restored to. If your most recent backup is a month old, you will only be able to restore the database to what it looked like when that backup was taken, so you could potentially lose a month of data.
On the other hand, if your most recent backup was taken five minutes ago, you can restore up to that point in time and would only lose five minutes of data.

So you need to ask yourself how much data you are willing to lose: hours of data, minutes of data, seconds of data, or zero data? If you absolutely, positively must have zero data loss, then backups alone will not solve your problem.
You would need to look into high availability solutions such as clustering or availability groups. These solutions can get very expensive very quickly. Backups, on the other hand, are very inexpensive to run, so budget constraints sometimes mean we have to live with losing a few minutes of data.
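
For example, in the FULL recovery model, frequent transaction log backups are the inexpensive way to shrink the RPO; a sketch of a full-plus-log backup pattern (the database name and paths are placeholders):

```sql
-- One full backup, then log backups every few minutes (scheduled via a
-- SQL Server Agent job) give an RPO of roughly that interval.
BACKUP DATABASE [SalesDB] TO DISK = N'C:\Backups\SalesDB.bak';
GO
BACKUP LOG [SalesDB] TO DISK = N'C:\Backups\SalesDB_log.trn';
GO
```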

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~RTO~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


We also need to think about the recovery time objective, or RTO, which is the amount of time it takes to do the restore.
A very large database can take several hours, or maybe a day, to restore, and in some cases that is not acceptable. Typically, a restore takes slightly longer than the corresponding backup: if a full backup takes 20 minutes, expect the restore of that backup to take longer than 20 minutes, but we cannot say exactly how long without testing.
Everyone's hardware and bandwidth are different, so if you want an exact number for how long a restore will take on your system, you will need to run some tests.
You also need to think about the time to prepare for a restore. When a disaster happens, you will need time to locate the tapes or files, log in to SQL Server, and kick off the restore process.
Most of those steps should not take long, but if you store all of your backup tapes off site, you will need to call for those tapes and wait for them to be returned to your site.

That can take a significant amount of time. If you are doing a single restore, the graphical interface is fine.
If you need to restore multiple backups, it might be easier to do that with a script, so consider how long it will take you to write that script.
Some of it can be written ahead of time, and if you are testing on a regular basis, you probably already have the basics of a restore script. So again, these are some things to think about.

Who decides RPO & RTO
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ideally, a DBA would not decide the recovery point objective and recovery time objective on their own.
You should meet with the head of the IT department to determine what these objectives should be, and make sure they are clearly communicated throughout the organization. Then, when a disaster does happen, everyone knows what to expect: how long the recovery will take and how much data might potentially be lost.

 Kerberos VS NTLM
 --------------------

NTLM authentication: challenge-response mechanism.

In the NTLM protocol, the client sends the user name to the server; the server generates and sends a challenge to the client; the client encrypts that challenge using the user's password; and the client sends the response to the server. For a local user account, the server validates the response against the Security Account Manager; for a domain user account, the server forwards the response to the domain controller for validation and retrieves the group policy of the user account, then constructs an access token and establishes a session for the user.

Kerberos authentication: trusted third-party scheme.

Kerberos authentication provides a mechanism for mutual authentication between a client and a server on an open network.
The three heads of Kerberos are the Key Distribution Center (KDC), the client user, and the server with the desired service to access. The KDC is installed as part of the domain controller and performs two service functions: the Authentication Service (AS) and the Ticket-Granting Service (TGS). When the client user logs on to the network, it requests a Ticket-Granting Ticket (TGT) from the AS in the user's domain. When the client wants to access a network resource, it presents the TGT, an authenticator, and the Service Principal Name (SPN) of the target server to the TGS in the service account's domain, and retrieves a session ticket for further communication with the network service. Once the target server validates the authenticator, it creates an access token for the client user.
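
On SQL Server you can check which of the two schemes your current connection actually negotiated; a small sketch:

```sql
-- auth_scheme returns KERBEROS or NTLM for Windows logins
-- (and SQL for SQL Server authentication).
SELECT session_id, auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```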

Combining sys.dm_db_stats_properties & stats_date


sys.dm_db_stats_properties:

--> Returns properties of statistics for the specified database object (table or indexed view) in the current SQL Server database.

stats_date:

--> Returns the date of the most recent update for statistics on a table or indexed view.

USE AdventureWorks2012;
GO
SELECT name AS stats_name, 
    STATS_DATE(object_id, stats_id) AS statistics_update_date
FROM sys.stats 
WHERE object_id = OBJECT_ID('Person.Address');
GO


USE AdventureWorks2012;
GO
SELECT name AS index_name, 
    STATS_DATE(object_id, index_id) AS statistics_update_date
FROM sys.indexes 
WHERE object_id = OBJECT_ID('Person.Address');
GO

sys.dm_db_stats_properties 


A. Simple example
The following example returns the statistics for the Person.Person table in the AdventureWorks database.



SELECT * FROM sys.dm_db_stats_properties (object_id('Person.Person'), 1);


B. Returning all statistics properties for a table

The following example returns properties of all statistics that exist for the table TEST.

SELECT sp.stats_id, name, filter_definition, last_updated, rows, rows_sampled, steps, unfiltered_rows, modification_counter 
FROM sys.stats AS stat 
CROSS APPLY sys.dm_db_stats_properties(stat.object_id, stat.stats_id) AS sp
WHERE stat.object_id = object_id('TEST');


C. Returning statistics properties for frequently modified objects

The following example returns all tables, indexed views, and statistics in the current database for which the leading column was modified more than 1000 times since the last statistics update.


SELECT obj.name, obj.object_id, stat.name, stat.stats_id, last_updated, modification_counter
FROM sys.objects AS obj 
INNER JOIN sys.stats AS stat ON stat.object_id = obj.object_id
CROSS APPLY sys.dm_db_stats_properties(stat.object_id, stat.stats_id) AS sp
WHERE modification_counter > 1000;

D. Returning statistics properties for statistics not updated in the last 7 days

SELECT OBJECT_NAME(id), name, STATS_DATE(id, indid), rowmodctr
FROM sys.sysindexes
WHERE STATS_DATE(id, indid) <= DATEADD(DAY, -7, GETDATE())
AND rowmodctr > 0
AND id IN (SELECT object_id FROM sys.tables)

stats_date :- 

Returns the date of the most recent update for statistics on a table or indexed view.


A. Return the dates of the most recent statistics for a table

The following example returns the date of the most recent update for each statistics object on the Person.Address table.


USE AdventureWorks2012;
GO
SELECT name AS stats_name, 
    STATS_DATE(object_id, stats_id) AS statistics_update_date
FROM sys.stats 
WHERE object_id = OBJECT_ID('Person.Address');
GO

If statistics correspond to an index, the stats_id value in the sys.stats catalog view is the same as the index_id value in the sys.indexes catalog view, and the following query returns the same results as the preceding query. If statistics do not correspond to an index, they are in the sys.stats results but not in the sys.indexes results.


USE AdventureWorks2012;
GO

SELECT name AS index_name, 
    STATS_DATE(object_id, index_id) AS statistics_update_date
FROM sys.indexes 
WHERE object_id = OBJECT_ID('Person.Address');
GO
Examples: Azure SQL Data Warehouse and Parallel Data Warehouse

B. Learn when a named statistics was last updated

The following example creates statistics on the LastName column of the DimCustomer table. It then runs a query to show the date of the statistics, updates the statistics, and runs the query again to show the updated date.


--First, create a statistics object

USE AdventureWorksPDW2012;
GO
CREATE STATISTICS Customer_LastName_Stats
ON AdventureWorksPDW2012.dbo.DimCustomer (LastName)
WITH SAMPLE 50 PERCENT;
GO

--Return the date when Customer_LastName_Stats was last updated 

USE AdventureWorksPDW2012;
GO
SELECT stats_id, name AS stats_name, 
    STATS_DATE(object_id, stats_id) AS statistics_date
FROM sys.stats s
WHERE s.object_id = OBJECT_ID('dbo.DimCustomer')
    AND s.name = 'Customer_LastName_Stats';
GO

--Update Customer_LastName_Stats so it will have a different timestamp in the next query
UPDATE STATISTICS dbo.DimCustomer (Customer_LastName_Stats);
GO


--Return the date when Customer_LastName_Stats was last updated.  

SELECT stats_id, name AS stats_name, 
    STATS_DATE(object_id, stats_id) AS statistics_date
FROM sys.stats s
WHERE s.object_id = OBJECT_ID('dbo.DimCustomer')
    AND s.name = 'Customer_LastName_Stats';
GO

C. View the date of the last update for all statistics on a table
This example returns the date for when each statistics object on the DimCustomer table was last updated.


--Return the dates all statistics on the table were last updated.

SELECT stats_id, name AS stats_name, 
    STATS_DATE(object_id, stats_id) AS statistics_date
FROM sys.stats s
WHERE s.object_id = OBJECT_ID('dbo.DimCustomer');
GO

If statistics correspond to an index, the stats_id value in the sys.stats catalog view is the same as the index_id value in the sys.indexes catalog view, and the following query returns the same results as the preceding query. If statistics do not correspond to an index, they are in the sys.stats results but not in the sys.indexes results.


USE AdventureWorksPDW2012;
GO
SELECT name AS index_name, 
    STATS_DATE(object_id, index_id) AS statistics_update_date
FROM sys.indexes 
WHERE object_id = OBJECT_ID('dbo.DimCustomer');
GO

Server-level firewall rule versus a database-level firewall rule, and how to troubleshoot the database firewall in Azure SQL Database.


Q. Should users of one database be fully isolated from another database?
If yes, grant access using database-level firewall rules. This avoids using server-level firewall rules, which permit access through the firewall to all databases, reducing the depth of your defenses.

Q. Do users at the same IP addresses need access to all databases?
Use server-level firewall rules to reduce the number of times you must configure firewall rules.

Q. Does the person or team configuring the firewall rules only have access through the Azure portal, PowerShell, or the REST API?
You must use server-level firewall rules. Database-level firewall rules can only be configured using Transact-SQL.

Q. Is the person or team configuring the firewall rules prohibited from having high-level permission at the database level?

Use server-level firewall rules. Configuring database-level firewall rules using Transact-SQL requires at least CONTROL DATABASE permission at the database level.

Q. Is the person or team configuring or auditing the firewall rules, centrally managing firewall rules for many (perhaps 100s) of databases?

This selection depends upon your needs and environment. Server-level firewall rules might be easier to configure, but scripting can configure rules at the database-level. And even if you use server-level firewall rules, you might need to audit the database-firewall rules, to see if users with CONTROL permission on the database have created database-level firewall rules.

Q. Can I use a mix of both server-level and database-level firewall rules?

Yes. Some users, such as administrators might need server-level firewall rules. Other users, such as users of a database application, might need database-level firewall rules.
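
A sketch of managing database-level rules with Transact-SQL (run in the user database itself; the rule name and IP range below are placeholders):

```sql
-- Create or update a database-level firewall rule:
EXECUTE sp_set_database_firewall_rule
    @name = N'ExampleDbRule',
    @start_ip_address = '192.168.1.1',
    @end_ip_address = '192.168.1.10';

-- Audit the existing database-level rules:
SELECT * FROM sys.database_firewall_rules;

-- Remove the rule when it is no longer needed:
EXECUTE sp_delete_database_firewall_rule @name = N'ExampleDbRule';
```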

Troubleshooting the database firewall
----------------------------------------------------------------------------------------
Consider the following points when access to the Microsoft Azure SQL Database service does not behave as you expect:

Local firewall configuration: Before your computer can access Azure SQL Database, you may need to create a firewall exception on your computer for TCP port 1433. If you are making connections inside the Azure cloud boundary, you may have to open additional ports. For more information, see the SQL Database: Outside vs inside section of Ports beyond 1433 for ADO.NET 4.5 and SQL Database.
Network address translation (NAT): Due to NAT, the IP address used by your computer to connect to Azure SQL Database may be different than the IP address shown in your computer IP configuration settings. To view the IP address your computer is using to connect to Azure, log in to the portal and navigate to the Configure tab on the server that hosts your database. Under the Allowed IP Addresses section, the Current Client IP Address is displayed. Click Add to the Allowed IP Addresses to allow this computer to access the server.

Changes to the allow list have not taken effect yet: There may be as much as a five-minute delay for changes to the Azure SQL Database firewall configuration to take effect.
The login is not authorized or an incorrect password was used: If a login does not have permissions on the Azure SQL Database server or the password used is incorrect, the connection to the Azure SQL Database server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server; each client must provide the necessary security credentials. For more information about preparing logins, see Managing Databases, Logins, and Users in Azure SQL Database.
Dynamic IP address: If you have an Internet connection with dynamic IP addressing and you are having trouble getting through the firewall, you could try one of the following solutions:

Ask your Internet Service Provider (ISP) for the IP address range assigned to your client computers that access the Azure SQL Database server, and then add the IP address range as a firewall rule.
Get static IP addressing instead for your client computers, and then add the IP addresses as firewall rules.

Monitor System Activity Using Extended Events
------------------------------------------------------------


CREATE EVENT SESSION test0
ON SERVER
ADD EVENT sqlserver.checkpoint_begin,
ADD EVENT sqlserver.checkpoint_end
WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)
go

--Add the bucketing target with 32 buckets to count the number of checkpoints based on the database --ID.

ALTER EVENT SESSION test0
ON SERVER
ADD TARGET package0.histogram
(
      SET slots = 32, filtering_event_name = 'sqlserver.checkpoint_end', source_type = 0, source = 'database_id'
)
go
--Issue the following statements to add the ETW target. This will enable you to see the begin and end --events, which is used to determine how long the checkpoint takes.


ALTER EVENT SESSION test0
ON SERVER
ADD TARGET package0.etw_classic_sync_target
go

--Issue the following statements to start the session and begin event collection.
ALTER EVENT SESSION test0
ON SERVER
STATE = start
go

USE tempdb
      checkpoint
go

--Issue the following statements to cause three events to fire.
USE master
      checkpoint
      checkpoint
go
--Issue the following statements to view the event counts.
SELECT CAST(xest.target_data AS xml) Bucketizer_Target_Data_in_XML
FROM sys.dm_xe_session_targets xest
JOIN sys.dm_xe_sessions xes ON xes.address = xest.event_session_address
JOIN sys.server_event_sessions ses ON xes.name = ses.name
WHERE xest.target_name = 'histogram' AND xes.name = 'test0'
go


I executed CHECKPOINT 21 times, hence the output shows a slot count of 21; this captures how many times a checkpoint has run on the SQL Server.
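
When you are done, the session can be stopped and dropped; a sketch:

```sql
-- Stop event collection:
ALTER EVENT SESSION test0 ON SERVER STATE = stop;
GO
-- Remove the session definition from the server:
DROP EVENT SESSION test0 ON SERVER;
GO
```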


Thanks for reading.


Backup encryption - Transparent Data Encryption - Always Encrypted - column-level encryption differences

This blog post covers:

1. Backup encryption
2. TDE (Transparent Data Encryption)
3. Always Encrypted
4. Cell-level & column-level encryption
5. Encrypting a column of data using symmetric encryption that includes an authenticator
6. Relevant practice questions



~~~~~~~~~~~~~~~1.Backup Encryption~~~~~~~~~~~~~

-- Executed on SOURCE
-- Create a database "master" key.
USE Master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'els2234234ljljasdfLJLJ23'
GO 

CREATE CERTIFICATE CryptoBackup
WITH SUBJECT = 'CryptoBackup',
 EXPIRY_DATE = '20301031';
GO 

-- Does not have to be same password as "master" key.

BACKUP CERTIFICATE CryptoBackup TO FILE = 'C:\Temp\CertBackup.cer'
  WITH PRIVATE KEY (FILE = 'C:\Temp\CertBackup.pvk',
  ENCRYPTION BY PASSWORD = 'els2234234ljljasdfLJLJ23')
GO

BACKUP DATABASE [ToEncrypt]
TO DISK = N'C:\Temp\ToEncrypt.bak'
WITH
  COMPRESSION,
  ENCRYPTION  
   (
   ALGORITHM = AES_256,
   SERVER CERTIFICATE = CryptoBackup
   ),
  STATS = 10
GO 

---------------------------
-- Executed on TARGET
-- Create a database "master" key.
USE Master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'els2234234ljljasdfLJLJ23'
GO

-- Copy Certificate to target then execute create command
CREATE CERTIFICATE CryptoBackup
    FROM FILE = 'C:\Temp\CertBackup.cer'
    WITH PRIVATE KEY (FILE = 'C:\Temp\CertBackup.pvk',
    DECRYPTION BY PASSWORD = 'els2234234ljljasdfLJLJ23');
GO

-- Copy bak to TARGET then Restore database from encrypted backup using Certificate
USE [master]
RESTORE DATABASE [ToEncrypt]
 FROM DISK = N'C:\Temp\ToEncrypt.bak' WITH  FILE = 1, NOUNLOAD,  REPLACE,  STATS = 5
GO

-- Error if missing the certificate
Msg 33111, Level 16, State 3, Line 56
Cannot find server certificate with thumbprint '0xA70B9D1DC7B5ECFF4B84205340987FA692EA2889'.
Msg 3013, Level 16, State 1, Line 56
RESTORE DATABASE is terminating abnormally.


~~~~~~~~~~2.TDE ~~~~~~~~~~~~~


USE Master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '*rt@40(FL&dasl1';
GO
CREATE CERTIFICATE TestSQLServerCert
WITH SUBJECT = 'Certificate to protect TDE key',

 EXPIRY_DATE = '20301031';
GO

-- Create a database to be protected by TDE.
CREATE DATABASE CustRecords;
GO
-- Create a database encryption key, that is protected by the server certificate in the master database.
USE CustRecords;
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE TestSQLServerCert;
GO
-- Create a backup of the server certificate in the master database.
-- The following code stores the backup of the certificate and the private key file in the default data location for this instance of SQL Server
-- (C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA).
Use master
go
BACKUP CERTIFICATE TestSQLServerCert
TO FILE = 'C:\Temp\SQLServerCertificate'
WITH PRIVATE KEY
(
    FILE = 'C:\Temp\SQLServerPrivateKeyFile',
    ENCRYPTION BY PASSWORD = '*rt@40(FL&dasl1'
);

-- Switch to the new database.
-- Alter the new database to encrypt the database using TDE.
use CustRecords
go
ALTER DATABASE CustRecords SET ENCRYPTION ON;
GO
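
Encryption runs as a background scan after SET ENCRYPTION ON; you can watch its progress with sys.dm_database_encryption_keys, a sketch:

```sql
-- encryption_state: 2 = encryption in progress, 3 = encrypted.
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,
       percent_complete
FROM sys.dm_database_encryption_keys;
```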



~~~~~~~3.Always Encrypted~~~~~

If the following objects already exist, drop them; otherwise ignore these statements:

USE [master]
/****** Object:  ColumnEncryptionKey [MyCEK]    Script Date: 21-03-2018 12:27:57 ******/
DROP COLUMN ENCRYPTION KEY [MyCEK]
GO




USE [master]
/****** Object:  ColumnMasterKey [MyCMK]    Script Date: 21-03-2018 12:27:09 ******/
DROP COLUMN MASTER KEY [MyCMK]
GO


The following Transact-SQL creates 
1.column master key metadata, 
2.column encryption key metadata, 
3.A table with encrypted columns.

Always Encrypted terms:

- Deterministic encryption uses a method which always generates the same encrypted value for any given plain text value. Using deterministic encryption allows grouping, filtering by equality, and joining tables based on encrypted values, but can also allow unauthorized users to guess information about encrypted values by examining patterns in the encrypted column. This weakness is increased when there is a small set of possible encrypted values, such as True/False, or North/South/East/West region. Deterministic encryption must use a column collation with a binary2 sort order for character columns.
- Randomized encryption uses a method that encrypts data in a less predictable manner. Randomized encryption is more secure, but prevents equality searches, grouping, indexing, and joining on encrypted columns.
- Column master keys are protecting keys used to encrypt column encryption keys. Column master keys must be stored in a trusted key store. Information about column master keys, including their location, is stored in the database in system catalog views.
- Column encryption keys are used to encrypt sensitive data stored in database columns. All values in a column can be encrypted using a single column encryption key. Encrypted values of column encryption keys are stored in the database in system catalog views. You should store column encryption keys in a secure/trusted location for backup.

CREATE COLUMN MASTER KEY MyCMK  
WITH (  
     KEY_STORE_PROVIDER_NAME = 'MSSQL_CERTIFICATE_STORE',   
     KEY_PATH = 'Current User/Personal/f2260f28d909d21c642a3d8e0b45a830e79a1420'  
   );  
---------------------------------------------  
CREATE COLUMN ENCRYPTION KEY MyCEK   
WITH VALUES  
(  
    COLUMN_MASTER_KEY = MyCMK,   
    ALGORITHM = 'RSA_OAEP',   
    ENCRYPTED_VALUE = 0x01700000016C006F00630061006C006D0061006300680069006E0065002F006D0079002F003200660061006600640038003100320031003400340034006500620031006100320065003000360039003300340038006100350064003400300032003300380065006600620063006300610031006300284FC4316518CF3328A6D9304F65DD2CE387B79D95D077B4156E9ED8683FC0E09FA848275C685373228762B02DF2522AFF6D661782607B4A2275F2F922A5324B392C9D498E4ECFC61B79F0553EE8FB2E5A8635C4DBC0224D5A7F1B136C182DCDE32A00451F1A7AC6B4492067FD0FAC7D3D6F4AB7FC0E86614455DBB2AB37013E0A5B8B5089B180CA36D8B06CDB15E95A7D06E25AACB645D42C85B0B7EA2962BD3080B9A7CDB805C6279FE7DD6941E7EA4C2139E0D4101D8D7891076E70D433A214E82D9030CF1F40C503103075DEEB3D64537D15D244F503C2750CF940B71967F51095BFA51A85D2F764C78704CAB6F015EA87753355367C5C9F66E465C0C66BADEDFDF76FB7E5C21A0D89A2FCCA8595471F8918B1387E055FA0B816E74201CD5C50129D29C015895CD073925B6EA87CAF4A4FAF018C06A3856F5DFB724F42807543F777D82B809232B465D983E6F19DFB572BEA7B61C50154605452A891190FB5A0C4E464862CF5EFAD5E7D91F7D65AA1A78F688E69A1EB098AB42E95C674E234173CD7E0925541AD5AE7CED9A3D12FDFE6EB8EA4F8AAD2629D4F5A18BA3DDCC9CF7F352A892D4BEBDC4A1303F9C683DACD51A237E34B045EBE579A381E26B40DCFBF49EFFA6F65D17F37C6DBA54AA99A65D5573D4EB5BA038E024910A4D36B79A1D4E3C70349DADFF08FD8B4DEE77FDB57F01CB276ED5E676F1EC973154F86  
);  
---------------------------------------------  
CREATE TABLE Customers (  
    CustName nvarchar(60)   
        COLLATE  Latin1_General_BIN2 ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = MyCEK,  
        ENCRYPTION_TYPE = RANDOMIZED,  
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),   
    SSN varchar(11)   
        COLLATE  Latin1_General_BIN2 ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = MyCEK,  
        ENCRYPTION_TYPE = DETERMINISTIC ,  
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),   
    Age int NULL  
);  
GO 
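
After the script runs, the key and column metadata can be verified from the catalog views; a sketch:

```sql
-- Column master key and column encryption key metadata:
SELECT name, key_store_provider_name, key_path FROM sys.column_master_keys;
SELECT name FROM sys.column_encryption_keys;

-- Which columns of the new table are encrypted, and how:
SELECT c.name, c.encryption_type_desc, c.encryption_algorithm_name
FROM sys.columns c
WHERE c.object_id = OBJECT_ID('dbo.Customers')
  AND c.encryption_type IS NOT NULL;
```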


4. Encrypt a Column of Data

--------------------------------------

How to encrypt a column of data using symmetric encryption in SQL Server 2017 with Transact-SQL. This is sometimes known as column-level encryption, or cell-level encryption.

--You must have a database master key
-- This is used to encrypt & decrypt the key


CREATE MASTER KEY ENCRYPTION BY
PASSWORD = '<some strong password>';

-- Create a certificate
-- This is used by SQL Server to encrypt & decrypt the data when it is used

USE AdventureWorks2012;
GO

CREATE CERTIFICATE Sales09
   WITH SUBJECT = 'Customer Credit Card Numbers',
 EXPIRY_DATE = '20301031';
GO
--Create a symmetric key
  --( This is used by the certificate to encrypt & decrypt the data when it is accesses)
CREATE SYMMETRIC KEY CreditCards_Key11
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE Sales09;
GO

-- Create a column in which to store the encrypted data.
ALTER TABLE Sales.CreditCard
    ADD CardNumber_Encrypted varbinary(128);
GO

-- Open the symmetric key with which to encrypt the data.
OPEN SYMMETRIC KEY CreditCards_Key11
   DECRYPTION BY CERTIFICATE Sales09;

-- Encrypt the value in column CardNumber using the
-- symmetric key CreditCards_Key11.
-- Save the result in column CardNumber_Encrypted.
UPDATE Sales.CreditCard
SET CardNumber_Encrypted = EncryptByKey(Key_GUID('CreditCards_Key11')
    , CardNumber, 1, HashBytes('SHA1', CONVERT( varbinary
    , CreditCardID)));
GO

-- Verify the encryption.
-- First, open the symmetric key with which to decrypt the data.

OPEN SYMMETRIC KEY CreditCards_Key11
   DECRYPTION BY CERTIFICATE Sales09;
GO

-- Now list the original card number, the encrypted card number,
-- and the decrypted ciphertext. If the decryption worked,
-- the original number will match the decrypted number.

SELECT CardNumber, CardNumber_Encrypted
    AS 'Encrypted card number', CONVERT(nvarchar,
    DecryptByKey(CardNumber_Encrypted, 1 ,
    HashBytes('SHA1', CONVERT(varbinary, CreditCardID))))
    AS 'Decrypted card number' FROM Sales.CreditCard;
GO


To encrypt a column of data using symmetric encryption that includes an authenticator


   In Object Explorer, connect to an instance of Database Engine.
   On the Standard bar, click New Query.
   Copy and paste the following example into the query window and click Execute.

USE AdventureWorks2012;
GO

CREATE CERTIFICATE HumanResources037
   WITH SUBJECT = 'Employee Social Security Numbers',
    EXPIRY_DATE = '20301031';
GO

CREATE SYMMETRIC KEY SSN_Key_01
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE HumanResources037;
GO

USE [AdventureWorks2012];
GO

-- Create a column in which to store the encrypted data.
ALTER TABLE HumanResources.Employee
    ADD EncryptedNationalIDNumber varbinary(128);
GO

-- Open the symmetric key with which to encrypt the data.
OPEN SYMMETRIC KEY SSN_Key_01
   DECRYPTION BY CERTIFICATE HumanResources037;

-- Encrypt the value in column NationalIDNumber with symmetric
-- key SSN_Key_01. Save the result in column EncryptedNationalIDNumber.
UPDATE HumanResources.Employee
SET EncryptedNationalIDNumber = EncryptByKey(Key_GUID('SSN_Key_01'), NationalIDNumber);
GO

-- Verify the encryption.
-- First, open the symmetric key with which to decrypt the data.
OPEN SYMMETRIC KEY SSN_Key_01
   DECRYPTION BY CERTIFICATE HumanResources037;
GO

-- Now list the original ID, the encrypted ID, and the
-- decrypted ciphertext. If the decryption worked, the original
-- and the decrypted ID will match.
SELECT NationalIDNumber, EncryptedNationalIDNumber
    AS 'Encrypted ID Number',
    CONVERT(nvarchar, DecryptByKey(EncryptedNationalIDNumber))
    AS 'Decrypted ID Number'
    FROM HumanResources.Employee;

GO

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Q. Which of the following options should you use with BACKUP to ensure that your backup will be encrypted?

1. WITH DESCRIPTION
2. WITH CHECKSUM
3. WITH SERVER CERTIFICATE
4. WITH ENCRYPTION

Ans: kindly reply if you get the correct answer.

Q2. You have an on-premises SQL Server that has a user database named HR_Applications. You configured column-based cell-level encryption on the database.
You need to back up the keys that were used to perform the encryption. Your solution must maximize security. What should you do?

1. Back up the user database.
2. Back up the key to an NTFS directory that is protected by an access control list.
3. Back up the model database.
4. Store the key in the system registry.

Ans: kindly reply if you get the correct answer.

Script to check which command is currently executing using sys.dm_exec_requests
----------------------------------------------------------------------------------------------------

The script below is very helpful when you need to know the percentage of a backup completed and its ETA, as well as the progress of DBCC commands.


SELECT r.session_id, r.command, CONVERT(NUMERIC(6,2), r.percent_complete)
AS [Percent Complete], CONVERT(VARCHAR(20), DATEADD(ms, r.estimated_completion_time, GETDATE()), 20) AS [ETA Completion Time],
CONVERT(NUMERIC(10,2), r.total_elapsed_time/1000.0/60.0) AS [Elapsed Min],
CONVERT(NUMERIC(10,2), r.estimated_completion_time/1000.0/60.0) AS [ETA Min],
CONVERT(NUMERIC(10,2), r.estimated_completion_time/1000.0/60.0/60.0) AS [ETA Hours],
CONVERT(VARCHAR(1000), (SELECT SUBSTRING(text, r.statement_start_offset/2 + 1,
CASE WHEN r.statement_end_offset = -1 THEN 1000 ELSE (r.statement_end_offset - r.statement_start_offset)/2 END)
FROM sys.dm_exec_sql_text(sql_handle))) AS [Current Statement]
FROM sys.dm_exec_requests r WHERE r.command LIKE '%DBCC%'

 or

SELECT r.session_id, r.command, CONVERT(NUMERIC(6,2), r.percent_complete)
AS [Percent Complete], CONVERT(VARCHAR(20), DATEADD(ms, r.estimated_completion_time, GETDATE()), 20) AS [ETA Completion Time],
CONVERT(NUMERIC(10,2), r.total_elapsed_time/1000.0/60.0) AS [Elapsed Min],
CONVERT(NUMERIC(10,2), r.estimated_completion_time/1000.0/60.0) AS [ETA Min],
CONVERT(NUMERIC(10,2), r.estimated_completion_time/1000.0/60.0/60.0) AS [ETA Hours],
CONVERT(VARCHAR(1000), (SELECT SUBSTRING(text, r.statement_start_offset/2 + 1,
CASE WHEN r.statement_end_offset = -1 THEN 1000 ELSE (r.statement_end_offset - r.statement_start_offset)/2 END)
FROM sys.dm_exec_sql_text(sql_handle))) AS [Current Statement]
FROM sys.dm_exec_requests r WHERE r.command IN ('RESTORE DATABASE','BACKUP DATABASE')

Enable SQL Server Managed Backup in Microsoft Azure Storage Account

How to enable SQL Server Managed Backup in Microsoft Azure storage Account using Powershell and SSMS.

==============================================

Step by Step configuration for SQL Server Managed Backup  and Restore database from Azure storage

Here we are  exploring manual using URL and automatic database backup using Microsoft Azure storage in SQL Server 2016 & SQL Server 2017

SQL Server Managed Backup to Azure manages and automates SQL Server backups to Microsoft Azure Blob storage. This feature was introduced in SQL Server 2014 as "smart backup"; it has been revamped from SQL Server 2016 onward as Managed Backup.

Managed Backup – Enhancement

Managed backup was introduced in SQL Server 2014 and allowed an automatic database backup to Microsoft Azure, based on changes done in the database. This feature schedules, performs, and maintains the backups; all a DBA needs to do is specify a retention period. In SQL Server 2014 there was not much control over frequency. In SQL Server 2016 this has been enhanced:

System databases can be backed up.
Backup of databases in simple recovery model is possible.
A backup schedule can be customized based on business need rather than log usage.
There are many objects added in the msdb database in SQL Server 2016 to manage these backups. They are located under a new schema called managed_backup. We can run the query below to find all objects under the new schema.

SELECT name, type_desc FROM msdb.sys.objects
WHERE SCHEMA_ID = SCHEMA_ID('managed_backup')


output:-

name type_desc
sp_backup_config_basic SQL_STORED_PROCEDURE
sp_backup_config_advanced SQL_STORED_PROCEDURE
sp_backup_config_schedule SQL_STORED_PROCEDURE
sp_create_job SQL_STORED_PROCEDURE
sp_add_task_command SQL_STORED_PROCEDURE
sp_backup_on_demand SQL_STORED_PROCEDURE
sp_set_parameter SQL_STORED_PROCEDURE
sp_get_backup_diagnostics SQL_STORED_PROCEDURE
sp_backup_master_switch SQL_STORED_PROCEDURE
sp_get_encryption_option SQL_STORED_PROCEDURE
sp_get_striping_option SQL_STORED_PROCEDURE
sp_do_backup SQL_STORED_PROCEDURE
fn_backup_db_config SQL_TABLE_VALUED_FUNCTION
fn_backup_instance_config SQL_TABLE_VALUED_FUNCTION
fn_is_master_switch_on SQL_SCALAR_FUNCTION
fn_get_parameter SQL_TABLE_VALUED_FUNCTION
fn_available_backups SQL_TABLE_VALUED_FUNCTION
UQ__fn_avail__FED62726584BD6B2 UNIQUE_CONSTRAINT
fn_get_current_xevent_settings SQL_TABLE_VALUED_FUNCTION
fn_get_health_status SQL_TABLE_VALUED_FUNCTION



Benefits
 Currently, automating backups for multiple databases requires developing a backup strategy, writing custom code, and scheduling backups. Using SQL Server Managed Backup to Windows Azure, you can create a backup plan by specifying only the retention period and storage location.

 SQL Server Managed Backup to Windows Azure schedules, performs, and maintains the backups, hence it is also called Smart Backup.

SQL Server Managed Backup to Azure storage can be configured at the database level or at the SQL Server instance level. When configuring at the instance level, any new databases are also backed up automatically.

You can also encrypt the backups for additional security, and you can set up a custom schedule to control when backups are taken (not required, since the default scheduling already ensures point-in-time restore).

Prerequisites

·        Microsoft Azure account
·        Azure storage account
·        MS SQL Server 2016 / 2017
·        SQL Server Agent
·        SSMS 17.x
·        DB server with Internet connectivity
·        Correct system clock


Backup Strategy

 ·        Full Database Backup

 Managed backup schedules a full database backup if any of the following is true:

o   A database is SQL Server Managed Backup to Windows Azure enabled for the first time.
o   The log growth since last full database backup is equal to or larger than 1 GB.
o   The maximum time interval of one week has passed since the last full database backup.
o   The log chain is broken. The most common reason for log chain breaks is probably a backup command issued using Transact-SQL or through the Backup task in SQL Server Management Studio.

·        Transaction Log Backup

SQL Server Managed Backup to Windows Azure schedules a log backup if any of the following is true:
o   There is no log backup history that can be found. This is usually true when SQL Server Managed Backup to Windows Azure is enabled for the first time.
o   The transaction log space used is 5 MB or larger.
o   The maximum time interval of 2 hours since the last log backup is reached.
o   Any time the transaction log backup is lagging behind a full database backup. The goal is to keep the log chain ahead of full backup.

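You can sanity-check these triggers yourself. The sketch below (an assumption of mine, not part of the managed backup feature) reads msdb backup history to show the last full backup per database, and uses DBCC SQLPERF to show current log usage:

```sql
-- Last full backup date per database (type 'D' = full database backup)
SELECT d.name AS database_name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D'
GROUP BY d.name;

-- Current transaction log size and percentage used for every database
DBCC SQLPERF(LOGSPACE);
```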

Now let's configure SQL Server Managed Backup to Microsoft Azure Blob storage:

================================================================

   1. Connect to a Microsoft Azure subscription using PowerShell and configure:
          Azure storage
          Container
          Container URL
          SAS token (and how to generate on-demand primary & secondary storage keys)
   2. SQL Server Backup to URL
   3. Managed Backup to Microsoft Azure
   4. Restoring from backups stored in Microsoft Azure
   5. File-snapshot backups for database files in Azure


So let's first enable and configure SQL Server Managed Backup to Microsoft Azure with default settings.

Create the Azure Blob Container

You can use the Azure Management Portal or Azure PowerShell to create the storage account. The following
New-AzureRmStorageAccount command creates a storage account named rakmanagedbackupstorage in the West US region.

PowerShell or https://shell.azure.com
============
Create a Resource group
------------------------------

New-AzureRmResourceGroup -Name demomanagedRG -Location "West US" -Tag @{Empty=$null; Department="Home"}

Create a Storage Account
----------------------------------

New-AzureRmStorageAccount -ResourceGroupName "demomanagedRG" -AccountName "rakmanagedbackupstorage" -Location "West US" -SkuName "Standard_LRS"



Create a blob container for the backup files: You can create a blob container in the Azure Management Portal or with Azure PowerShell.
The following New-AzureStorageContainer command creates a blob container named rakbackupcontainer in the rakmanagedbackupstorage storage account.

Create a storage context
-------------------------------

$context = New-AzureStorageContext -StorageAccountName rakmanagedbackupstorage -StorageAccountKey (Get-AzureRMStorageAccountKey -StorageAccountName rakmanagedbackupstorage -ResourceGroupName demomanagedRG).Value[0]

 Create a Container
--------------------------
New-AzureStorageContainer -Name rakbackupcontainer -Context $context

Generate a Shared Access Signature (SAS):
-------------------------------------------------------

 To access the container, you must create a SAS. This can be done in the Azure portal, in code, or with Azure PowerShell. The following New-AzureStorageContainerSASToken command creates a SAS token for the rakbackupcontainer blob container that expires in one year.

Lets Generate a Shared Access Signature (SAS)

New-AzureStorageContainerSASToken -Name rakbackupcontainer -Permission rwdl -ExpiryTime (Get-Date).AddYears(1) -FullUri -Context $context

The output for this command will contain both the URL to the container and the SAS token. The following is an example:


https://rakmanagedbackupstorage.blob.core.windows.net/rakbackupcontainer?sv=2017-04-17&sr=c&sig=w%2BbRLmeIxsBIx6obOSsvWWm%2FIB0F1zVVMM8LMKExF8E%3D&se=2019-03-13T15%3A45%3A54Z&sp=rwdl


In the previous example, separate the container URL from the SAS token at the question mark (do not include the question mark). For example, the previous output would result in the following two values.

Container URL: https://rakmanagedbackupstorage.blob.core.windows.net/rakbackupcontainer

SAS token: sv=2017-04-17&sr=c&sig=w%2BbRLmeIxsBIx6obOSsvWWm%2FIB0F1zVVMM8LMKExF8E%3D&se=2019-03-13T15%3A45%3A54Z&sp=rwdl
Record the container URL and SAS for use in creating a SQL CREDENTIAL.
For more information about SAS, see Shared Access Signatures, Part 1: Understanding the SAS Model.

Using PowerShell, if you want to get the access key for the storage account, use the command below:

(Get-AzureRMStorageAccountKey -StorageAccountName rakmanagedbackupstorage -ResourceGroupName demomanagedRG).Value[0]

The output of the above PowerShell command is the account key.

Enter the storage account name and account key to remain connected to Microsoft Azure Storage.
-----------------------------------------------------------------------------------------------------------



Create a credential with these details; only then will backup to URL work.



CREATE CREDENTIAL [https://rakmanagedbackupstorage.blob.core.windows.net/rakbackupcontainer]
WITH IDENTITY = 'Shared Access Signature',
SECRET = 'sv=2017-04-17&sr=c&sig=w%2BbRLmeIxsBIx6oXXXXXm%2FIB0F1zVVMM8LMKExF8E%3D&se=2019-03-13T15%3A45%3A54Z&sp=rwdl'

You will notice a credential gets created in SSMS.




Backup to URL
==============

BACKUP DATABASE [master] TO  URL = N'https://rakmanagedbackupstorage.blob.core.windows.net/rakbackupcontainer/master_backup_2018_03_13_175305.bak'
 WITH NOFORMAT, NOINIT,  NAME = N'master-Full Database Backup', NOSKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO
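Once the SAS credential exists, you can also verify a backup written to the container without restoring it. A minimal sketch; the blob name below is just the example file from the backup above:

```sql
-- Requires the CREATE CREDENTIAL for the container URL shown earlier
RESTORE VERIFYONLY
FROM URL = N'https://rakmanagedbackupstorage.blob.core.windows.net/rakbackupcontainer/master_backup_2018_03_13_175305.bak';
```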


Enable SQL Server Managed Backup to Microsoft Azure
====================================================
Create a SQL Credential for the SAS URL: Use the SAS token to create a SQL Credential for the blob container URL. In SQL Server Management Studio, use the following Transact-SQL query to create the credential for your blob container URL based on the following example:


Ensure SQL Server Agent service is Started and Running: Start SQL Server Agent if it is not currently running.
 SQL Server Managed Backup to Microsoft Azure requires SQL Server Agent to be running on the instance to perform backup operations. You may want to set SQL Server Agent to run automatically to make sure that backup operations can occur regularly.


CREATE CREDENTIAL [https://rakmanagedbackupstorage.blob.core.windows.net/rakbackupcontainer]
WITH IDENTITY = 'Shared Access Signature',
SECRET = 'sv=2017-04-17&sr=c&sig=w%2BbRLmeIxsBIx6oXXXXXm%2FIB0F1zVVMM8LMKExF8E%3D&se=2019-03-13T15%3A45%3A54Z&sp=rwdl'


Determine the retention period: Determine the retention period for the backup files. The retention period is specified in days and can range from 1 to 30.

Enable and configure SQL Server Managed Backup to Microsoft Azure : Start SQL Server Management Studio and connect to the target SQL Server instance.
From the query window run the following statement after you modify the values for the database name, container url, and retention period per your requirements:

Important

-->To enable managed backup at the instance level, specify NULL for the database_name parameter.


Here I am taking a backup of only one database, rakeshmydb:

Use msdb;
GO
EXEC msdb.managed_backup.sp_backup_config_basic
 @enable_backup = 1,
 @database_name = 'rakeshmydb',
 @container_url = 'https://rakmanagedbackupstorage.blob.core.windows.net/rakbackupcontainer',
 @retention_days = 30
GO


For all databases:
-----------------------

Use msdb;
GO
EXEC msdb.managed_backup.sp_backup_config_basic

 @enable_backup = 1,
 @database_name= NULL,
  @container_url = 'https://rakmanagedbackupstorage.blob.core.windows.net/rakbackupcontainer',
 @retention_days = 30
GO

Note:-
When enabled at the instance level, the configuration applies only to newly created databases. If you want the settings to apply to existing databases, you need to enable managed backup on each database.

To check the settings for each database, run the following T-SQL:



Use msdb;

SELECT * FROM managed_backup.fn_backup_db_config (NULL); 

The following example returns the SQL Server Managed Backup to Microsoft Azure default configuration settings for the instance it is executed on:

Use msdb;
GO
SELECT * FROM managed_backup.fn_backup_instance_config ();

The following configures custom scheduling options for SQL Server Managed Backup to Microsoft Azure:

USE msdb;
GO
EXEC managed_backup.sp_backup_config_schedule
     @database_name =  'rakeshmydb'
    ,@scheduling_option = 'Custom'
    ,@full_backup_freq_type = 'daily'
    --,@days_of_week = 'Tuesday'
    ,@backup_begin_time =  '17:30'
    ,@backup_duration = '02:00'
    ,@log_backup_freq = '00:05'
GO

SQL Server Managed Backup to Microsoft Azure is now enabled on the database you specified. It may take up to 15 minutes for the backup operations on the database to start to run.
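If you do not want to wait for the scheduler, the managed_backup schema (see the object list earlier) also exposes sp_backup_on_demand, which triggers a backup immediately. A sketch, assuming the database is already enabled for managed backup; check the parameters on your build:

```sql
USE msdb;
GO
-- @type can be 'Database' for a full backup or 'Log' for a log backup
EXEC msdb.managed_backup.sp_backup_on_demand
     @database_name = 'rakeshmydb',
     @type = 'Database';
GO
```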

Review Extended Events Default Configuration: Review the Extended Events settings by running the following Transact-SQL statement.


SELECT * FROM msdb.managed_backup.fn_get_current_xevent_settings()

You should see that Admin, Operational, and Analytical channel events are enabled by default and cannot be disabled. This should be sufficient to monitor the events that require manual intervention.
You can also enable debug events; the debug channels include informational and debug events that SQL Server Managed Backup to Microsoft Azure uses to detect and solve issues.

Enable and Configure Notification for Health Status: SQL Server Managed Backup to Microsoft Azure has a stored procedure that creates an agent job to send out e-mail notifications of errors or warnings that may require attention. The following steps describe the process to enable and configure e-mail notifications:

Set up Database Mail if it is not already enabled on the instance. For more information, see Configure Database Mail.

Configure SQL Server Agent Notification to use Database Mail. For more information, see Configure SQL Server Agent Mail to Use Database Mail.

Enable e-mail notifications to receive backup errors and warnings: From the query window, run the following Transact-SQL statements:



EXEC msdb.managed_backup.sp_set_parameter
@parameter_name = 'SSMBackup2WANotificationEmailIds',
@parameter_value = 'kushagra.rakesh@gmail.com'

View backup files in the Microsoft Azure Storage Account: Connect to the storage account from SQL Server Management Studio or the Azure Management Portal. You will see any backup files in the container you specified. Note that you might see a database and a log backup within 5 minutes of enabling SQL Server Managed Backup to Microsoft Azure for the database.

Monitor the Health Status: You can monitor through e-mail notifications you configured previously, or actively monitor the events logged. The following are some example Transact-SQL Statements used to view the events:



--  view all admin events
Use msdb;
Go
DECLARE @startofweek datetime
DECLARE @endofweek datetime
SET @startofweek = DATEADD(Day, 1-DATEPART(WEEKDAY, CURRENT_TIMESTAMP), CURRENT_TIMESTAMP)
SET @endofweek = DATEADD(Day, 7-DATEPART(WEEKDAY, CURRENT_TIMESTAMP), CURRENT_TIMESTAMP)

DECLARE @eventresult TABLE
(event_type nvarchar(512),
event nvarchar (512),
timestamp datetime
)

INSERT INTO @eventresult

EXEC managed_backup.sp_get_backup_diagnostics @begin_time = @startofweek, @end_time = @endofweek

SELECT * from @eventresult
WHERE event_type LIKE '%admin%'


-- To enable debug events

Use msdb;
Go
         EXEC managed_backup.sp_set_parameter 'FileRetentionDebugXevent', 'True'


--  View all events in the current week

Use msdb;
Go
DECLARE @startofweek datetime
DECLARE @endofweek datetime
SET @startofweek = DATEADD(Day, 1-DATEPART(WEEKDAY, CURRENT_TIMESTAMP), CURRENT_TIMESTAMP)
SET @endofweek = DATEADD(Day, 7-DATEPART(WEEKDAY, CURRENT_TIMESTAMP), CURRENT_TIMESTAMP)

EXEC managed_backup.sp_get_backup_diagnostics @begin_time = @startofweek,
 @end_time = @endofweek;

The steps described in this section are specifically for configuring SQL Server Managed Backup to Microsoft Azure for the first time on the database. You can modify the existing configurations using the same system stored procedures and provide the new values.

To disable SQL Server Managed Backup to Microsoft Azure for the database 'rakeshmydb':

EXEC msdb.managed_backup.sp_backup_config_basic
                @database_name = 'rakeshmydb'
                ,@enable_backup = 0;
GO

In order to disable it for all databases:

-- Create a working table to store the database names
Declare @DBNames TABLE

       (
             RowID int IDENTITY PRIMARY KEY
             ,DBName varchar(500)

       )
-- Define the variables
DECLARE @rowid int
DECLARE @dbname varchar(500)
DECLARE @SQL varchar(2000)
-- Get the database names from the system function

INSERT INTO @DBNames (DBName)

SELECT db_name
       FROM 

       msdb.managed_backup.fn_backup_db_config (NULL)
       WHERE is_managed_backup_enabled = 1
       AND is_dropped = 0

       --Select DBName from @DBNames

       select @rowid = min(RowID)
       FROM @DBNames

       WHILE @rowid IS NOT NULL
       BEGIN

             SET @dbname = (SELECT DBName FROM @DBNames WHERE RowID = @rowid)

             SET @SQL = 'EXEC msdb.managed_backup.sp_backup_config_basic
                @database_name = ''' + @dbname + ''',
                @enable_backup = 0'

             EXECUTE (@SQL)

             SELECT @rowid = MIN(RowID)
             FROM @DBNames WHERE RowID > @rowid

       END

Then review the configuration settings for all the databases on the instance using the following query:

Use msdb;
GO
SELECT * FROM managed_backup.fn_backup_db_config (NULL);
GO


To Pause SQL Server Managed Backup to Microsoft Azure Services Using Transact-SQL:
Use msdb;
GO
EXEC managed_backup.sp_backup_master_switch @new_state=0;
Go

To resume SQL Server Managed Backup to Microsoft Azure Using Transact-SQL

Use msdb;
Go
EXEC managed_backup.sp_backup_master_switch @new_state=1;
GO


The following example lists all the available backups backed up through SQL Server Managed Backup to Microsoft Azure for the database 'rakeshmydb':

SELECT *
FROM managed_backup.fn_available_backups ('rakeshmydb')




The following example returns aggregated error counts for the last 30 minutes from the time it was executed.

SELECT *
FROM managed_backup.fn_get_health_status(NULL, NULL)



The following example returns the aggregated error counts for the current week:

Use msdb
Go
DECLARE @startofweek datetime
DECLARE @endofweek datetime
SET @startofweek = DATEADD(Day, 1-DATEPART(WEEKDAY, CURRENT_TIMESTAMP), CURRENT_TIMESTAMP)
SET @endofweek = DATEADD(Day, 7-DATEPART(WEEKDAY, CURRENT_TIMESTAMP), CURRENT_TIMESTAMP)
SELECT *
FROM managed_backup.fn_get_health_status(@startofweek, @endofweek)

The following example returns the SQL Server Managed Backup to Microsoft Azure default configuration settings for the instance it is executed on:

Use msdb;
GO
SELECT * FROM managed_backup.fn_backup_instance_config ();


To review the configuration settings for all the databases on the instance, use the following query:

Use msdb;  
GO  
SELECT * FROM managed_backup.fn_backup_db_config (NULL);  

GO


Question: DB1 is a database that is hosted on an instance of SQL Server 2016. You are asked to run full and T-log backups from the database to Microsoft Azure.
You configure the Azure and SQL Server prerequisites for SQL Server Managed Backup to Microsoft Azure.
You enable Database Mail on the server instance.
You need to enable email notifications of errors and warnings generated during backups. Email should be sent to
your email address.
Which function or procedure should you run?

 1. managed_backup.sp_backup_config_advanced
 2. managed_backup.fn_backup_db_config
 3. managed_backup.sp_set_parameter
 4. managed_backup.sp_backup_master_switch


SQL Server Backup to URL Best Practices and Troubleshooting

https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-to-url-best-practices-and-troubleshooting?view=sql-server-2017

The topics below are coming soon:

   4.Restoring From Backups Stored in Microsoft Azure storage.
   5.File-Snapshot Backups for Database Files in Azure

You can refer this link too:-

https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-to-url?view=sql-server-2017

How to enable query store and its different Configuration Options

Query Store views, stored procedures, and ALTER commands collected in one place: its different configuration options and how Query Store works.

Views

The following diagram shows Query Store views and their logical relationships, with compile time information presented as blue entities:

[Figure: Query Store views and their logical relationships]


View Descriptions

sys.query_store_query_text: Presents unique query texts executed against the database. Comments and spaces before and after the query text are ignored; comments and spaces inside the text are not. Every statement in the batch generates a separate query text entry.

sys.query_context_settings: Presents unique combinations of plan-affecting settings under which queries are executed. The same query text executed with different plan-affecting settings produces a separate query entry in the Query Store, because context_settings_id is part of the query key.

sys.query_store_query: Query entries that are tracked and forced separately in the Query Store. A single query text can produce multiple query entries if it is executed under different context settings, or if it is executed outside vs. inside of different Transact-SQL modules (stored procedures, triggers, etc.).

sys.query_store_plan: Presents the estimated plan for the query with the compile-time statistics. The stored plan is equivalent to one that you would get by using SET SHOWPLAN_XML ON.

sys.query_store_runtime_stats_interval: Query Store divides time into automatically generated time windows (intervals) and stores aggregated statistics on that interval for every executed plan. The size of the interval is controlled by the configuration option Statistics Collection Interval (in Management Studio) or INTERVAL_LENGTH_MINUTES using ALTER DATABASE SET options (Transact-SQL).

sys.query_store_runtime_stats: Aggregated runtime statistics for executed plans. All captured metrics are expressed in the form of 4 statistic functions: Average, Minimum, Maximum, and Standard Deviation.
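To see how these views fit together in practice, here is a sketch that joins them to list the slowest queries by average duration (run it in the database where Query Store is enabled):

```sql
SELECT TOP 10
       qt.query_sql_text,
       q.query_id,
       p.plan_id,
       rs.avg_duration,
       rs.count_executions
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
     ON qt.query_text_id = q.query_text_id   -- one text can map to many query entries
JOIN sys.query_store_plan AS p
     ON q.query_id = p.query_id              -- one query can have many plans
JOIN sys.query_store_runtime_stats AS rs
     ON p.plan_id = rs.plan_id               -- per-interval runtime stats per plan
ORDER BY rs.avg_duration DESC;
```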



Query Store Stored Procedures

==========================

1. sp_query_store_flush_db          --> Flushes the in-memory portion of the Query Store data to disk.

2. sp_query_store_force_plan        --> Enables forcing a particular plan for a particular query.

3. sp_query_store_unforce_plan      --> Enables unforcing a particular plan for a particular query.

4. sp_query_store_remove_query      --> Removes the query, as well as all associated plans and runtime stats, from the query store.

5. sp_query_store_remove_plan       --> Removes a single plan from the query store.

6. sp_query_store_reset_exec_stats  --> Clears the runtime stats for a specific query plan from the query store.

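A typical plan-forcing workflow using these procedures looks like the sketch below; the query_id and plan_id values are placeholders, so look up the real ones in sys.query_store_plan first:

```sql
-- Find candidate plans for a given query (42 is a placeholder query_id)
SELECT plan_id, query_id, is_forced_plan, last_execution_time
FROM sys.query_store_plan
WHERE query_id = 42;

-- Force plan 7 for query 42, and later unforce it
EXEC sp_query_store_force_plan   @query_id = 42, @plan_id = 7;
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;
```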


Different ALTER commands to manage Query Store

=======================================

Use the ALTER DATABASE statement to enable the query store:

USE [master]
GO
ALTER DATABASE [AdventureWorksDW2016_EXT] SET QUERY_STORE = ON
GO


ALTER DATABASE [AdventureWorksDW2016_EXT] SET QUERY_STORE (OPERATION_MODE = READ_WRITE)
GO


If DATA_FLUSH_INTERVAL_SECONDS is set to a lower value, flushes from memory to disk occur more often, which has a negative impact on the performance of the SQL Server instance. If the value is increased, more Query Store information is held in memory before it is flushed to disk, which increases the risk of losing that data if SQL Server restarts or crashes.

Below is the T-SQL code for setting the Data Flush Interval (Minutes) option:

USE [master]
GO
ALTER DATABASE [AdventureWorksDW2016_EXT] SET QUERY_STORE (OPERATION_MODE = READ_WRITE, DATA_FLUSH_INTERVAL_SECONDS = 1800)
GO

Q:-
The Query Store uses the DATA_FLUSH_INTERVAL_SECONDS option to control how often the memory buffers are flushed to the database.

What is the default value of DATA_FLUSH_INTERVAL_SECONDS?

1. 900 seconds
2. 100 seconds
3. 700 seconds
4. 400 seconds

Ans : 900 seconds

Max Size (MB): Specifies the limit for the data space that Query Store will take inside your database. This is the most important setting, as it directly affects the operation mode of the Query Store.
While Query Store collects queries, execution plans, and statistics, its size in the database grows until this limit is reached. When that happens, Query Store automatically changes the operation mode to read-only and stops collecting new data, which means that your performance analysis is no longer accurate.
The default value (100 MB) may not be sufficient if your workload generates a large number of different queries and plans, or if you want to keep query history for a longer period of time. Keep track of current space usage and increase Max Size (MB) to prevent Query Store from transitioning to read-only mode. Use Management Studio or execute the following script to increase the Query Store size limit:

USE [master]
GO
ALTER DATABASE [AdventureWorksDW2016_EXT] SET QUERY_STORE (OPERATION_MODE = READ_WRITE, MAX_STORAGE_SIZE_MB = 200)
GO
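To check the current size and state (rather than just raising the limit), query sys.database_query_store_options from inside the user database; a sketch:

```sql
USE [AdventureWorksDW2016_EXT]
GO
SELECT actual_state_desc,          -- e.g. READ_WRITE or READ_ONLY
       readonly_reason,            -- non-zero when forced into read-only
       current_storage_size_mb,
       max_storage_size_mb
FROM sys.database_query_store_options;
GO
```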

Statistics Collection Interval: Defines the level of granularity for the collected runtime statistics (the default is 1 hour). Consider using a lower value if you require finer granularity or less time to detect and mitigate issues, but keep in mind that it directly affects the size of Query Store data. Use SSMS or Transact-SQL to set a different value for Statistics Collection Interval:



USE [master]
GO
ALTER DATABASE [AdventureWorksDW2016_EXT] SET QUERY_STORE (OPERATION_MODE = READ_WRITE, INTERVAL_LENGTH_MINUTES = 10)
GO


Arbitrary values are not allowed for INTERVAL_LENGTH_MINUTES. Use one of the following: 1, 5, 10, 15, 30, 60, or 1440 minutes.


Size Based Cleanup Mode: Specifies whether automatic data cleanup will take place when Query Store data size approaches the limit.
It is strongly recommended to activate size-based cleanup to make sure that Query Store always runs in read-write mode and collects the latest data.


USE [master]
GO
ALTER DATABASE [AdventureWorksDW2016_EXT] SET QUERY_STORE (OPERATION_MODE = READ_WRITE, SIZE_BASED_CLEANUP_MODE = AUTO)
GO


Stale Query Threshold (Days): Time-based cleanup policy that controls the retention period of persisted runtime statistics and inactive queries.
By default, Query Store is configured to keep the data for 30 days, which may be unnecessarily long for your scenario.
Avoid keeping historical data that you do not plan to use. This will reduce the chances of transitioning to read-only mode. The size of Query Store data, as well as the time to detect and mitigate an issue, will be more predictable. Use Management Studio or the following script to configure the time-based cleanup policy:

USE [master]
GO
ALTER DATABASE [AdventureWorksDW2016_EXT] SET QUERY_STORE (OPERATION_MODE = READ_WRITE, CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 300))
GO



or collectively:

ALTER DATABASE <database name>
SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30),
/*
Configure the STALE_QUERY_THRESHOLD_DAYS argument to specify the number of days to retain data in the query store. The default value is 30. For SQL Database Basic edition, the default is 7 days.
*/
    DATA_FLUSH_INTERVAL_SECONDS = 3000,
/*
Determines the frequency at which data written to the query store is persisted to disk. To optimize for performance, data collected by the query store is asynchronously written to disk. The frequency of this asynchronous transfer is configured via DATA_FLUSH_INTERVAL_SECONDS. The default value is 900 (15 minutes).
*/

    MAX_STORAGE_SIZE_MB = 500,
/*
Configures the maximum size of the query store. If the data in the query store hits the MAX_STORAGE_SIZE_MB limit, the query store automatically changes its state from read-write to read-only and stops collecting new data. The default value is 100 MB. For SQL Database Premium edition, the default is 1 GB, and for SQL Database Basic edition, 10 MB.
*/

    INTERVAL_LENGTH_MINUTES = 15,
/*
Determines the time interval at which runtime execution statistics are aggregated into the query store. To optimize for space usage, the runtime execution statistics in the Runtime Stats Store are aggregated over a fixed time window, configured via INTERVAL_LENGTH_MINUTES. The default value is 60.
*/

    SIZE_BASED_CLEANUP_MODE = AUTO,
/*
Controls whether the cleanup process is automatically activated when the total amount of data gets close to the maximum size. Can be AUTO (default) or OFF.
*/
    QUERY_CAPTURE_MODE = AUTO,
/*
Designates whether the Query Store captures all queries, only relevant queries based on execution count and resource consumption, or stops adding new queries and just tracks current ones. Can be ALL (capture all queries), AUTO (ignore infrequent queries and queries with insignificant compile and execution duration), or NONE (stop capturing new queries). The default value on SQL Server (SQL Server 2016 through SQL Server 2017) is ALL, while on Azure SQL Database it is AUTO.
*/

    MAX_PLANS_PER_QUERY = 1000,
/*
An integer representing the maximum number of plans maintained for each query. The default value is 200.
*/
    WAIT_STATS_CAPTURE_MODE = ON  -- available starting with SQL Server 2017
/*
Controls whether Query Store captures wait statistics information. Can be OFF or ON (default).
Query the sys.database_query_store_options view to determine the current options of the query store.
*/
);
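
After running the script, you can verify the effective settings by querying the sys.database_query_store_options catalog view mentioned above. A minimal sketch (all columns shown are documented columns of that view):

```sql
-- Check the current Query Store configuration for the current database
SELECT actual_state_desc,
       desired_state_desc,
       current_storage_size_mb,
       max_storage_size_mb,
       flush_interval_seconds,
       interval_length_minutes,
       stale_query_threshold_days,
       size_based_cleanup_mode_desc,
       query_capture_mode_desc,
       max_plans_per_query
FROM sys.database_query_store_options;
```

Comparing actual_state_desc against desired_state_desc is also the quickest way to spot a Query Store that silently switched to read-only.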


Q. Which Query Store view contains information about Query Store options for this database?

 1.sys.database_query_store_options
 2.sys.query_context_settings
 3.sys.query_store_plan 
 4.sys.query_store_query 
 5.sys.query_store_query_text
 6.sys.query_store_runtime_stats
 7.sys.query_store_wait_stats 
 Ans: 1

Q. If Query Store fills up and becomes read-only, after increasing the max size, what else must be done to allow new data to be captured?

1.The Desired State must be set to 'ON'
2.The Operational Mode must be set to 'READ-WRITE' 
3.Nothing further needs to be done.


Ans: 2
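
In practice, after Query Store goes read-only you would raise the storage limit and flip the operation mode back in a single statement. A sketch, where the database name and the 1024 MB cap are placeholders:

```sql
-- Increase the storage cap, then switch Query Store back to read-write
ALTER DATABASE [YourDatabase]
SET QUERY_STORE (
    MAX_STORAGE_SIZE_MB = 1024,   -- raise the limit first
    OPERATION_MODE = READ_WRITE   -- then re-enable data collection
);
```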


Q. Which of the Query Store built-in reports would be the best to use when ensuring that an upgrade won't result in slow queries?

1.The Top Resource Consuming Queries report

2.The Tracked Queries report

3.The Regressed Queries report

4.The Overall Resource Consumption report

Ans: 3


The following table explains when to use each of the Query Store views:

Regressed Queries: Pinpoint queries for which execution metrics have recently regressed (i.e., changed for the worse). Use this view to correlate observed performance problems in your application with the actual queries that need to be fixed or improved.

Overall Resource Consumption: Analyze the total resource consumption for the database for any of the execution metrics. Use this view to identify resource patterns (daily vs. nightly workloads) and optimize overall consumption for your database.

Top Resource Consuming Queries: Choose an execution metric of interest and identify queries that had the most extreme values for a provided time interval. Use this view to focus your attention on the most relevant queries with the biggest impact on database resource consumption.

Queries With Forced Plans: Lists the plans previously forced using Query Store. Use this view to quickly access all currently forced plans.

Queries With High Variation: Analyze queries with high execution variation as it relates to any of the available dimensions, such as Duration, CPU time, IO, and Memory usage, in the desired time interval. Use this view to identify queries with widely varying performance that can impact user experience across your applications.

Tracked Queries: Track the execution of the most important queries in real time. Typically, you use this view when you have queries with forced plans and you want to make sure that query performance is stable.
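
For the "Queries With Forced Plans" scenario, the same information is available directly from the catalog views. A sketch joining the documented sys.query_store_plan, sys.query_store_query, and sys.query_store_query_text views:

```sql
-- List all currently forced plans, with the query text they belong to
SELECT p.plan_id,
       p.query_id,
       qt.query_sql_text,
       p.last_execution_time
FROM sys.query_store_plan AS p
JOIN sys.query_store_query AS q
    ON q.query_id = p.query_id
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
WHERE p.is_forced_plan = 1;
```

Plans are forced and unforced with the sp_query_store_force_plan and sp_query_store_unforce_plan procedures, each of which takes a query_id and a plan_id.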


The Query Store uses the DATA_FLUSH_INTERVAL_SECONDS option to control how often data in the memory buffers is flushed to disk.

What is the default value for DATA_FLUSH_INTERVAL_SECONDS?

900 seconds
100 seconds
700 seconds
400 seconds

Ans: 900 seconds
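
Because of this asynchronous flush, data collected since the last flush can be lost on an unexpected shutdown. If you need to persist the in-memory portion of Query Store immediately (for example, before a planned failover), you can flush it on demand:

```sql
-- Force an immediate flush of Query Store in-memory data to disk
EXEC sys.sp_query_store_flush_db;
```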