Saturday, November 23, 2024

New features - SQL 2014 / 2017 / 2019

New features 

 

SQL 2014 

 

  • SQL Server In-Memory OLTP Overview 

SQL Server 2014 includes an In-Memory OLTP engine, code-named Hekaton. This engine provides a lock- and latch-free environment for OLTP workloads. It is fully integrated into SQL Server and accessed using standard T-SQL. Contrary to other products on the market, Hekaton is not a separate system; it is part of the SQL Server Database Engine. Hekaton enables you to use both disk-based tables and memory-optimized tables together in the same queries and stored procedures.

Memory-optimized tables can be defined as durable, so data persists across a server restart, or with schema-only durability, which preserves the table definition alone and is useful for ETL transformations and data staging.

Furthermore, Hekaton introduces natively compiled stored procedures: Transact-SQL stored procedures compiled to native code that interact with memory-optimized tables even more efficiently.
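
A minimal sketch of the syntax, assuming a database that already has a MEMORY_OPTIMIZED_DATA filegroup (the table, procedure, and column names are placeholders for illustration):

-- Durable memory-optimized table (DURABILITY = SCHEMA_ONLY would keep only the schema)
CREATE TABLE dbo.ShoppingCart
(
    CartId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled stored procedure working against the table above
CREATE PROCEDURE dbo.usp_AddCart @CartId INT, @UserId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, UserId) VALUES (@CartId, @UserId);
END;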

 

  • SQL Server 2014 Cloud Computing Enhancements 

Since this version of SQL Server was designed with the premise of being a platform for a hybrid cloud, it has some new and exciting features.

An on-premises SQL Server can host databases whose data and log files are stored in Windows Azure Storage. This means that you can move your storage into the cloud while keeping all the transaction processing on your local server. Furthermore, you can enable Transparent Data Encryption on such databases while keeping the encryption key on the local server for added security. You can deploy a SQL Server database to a Windows Azure Virtual Machine in a few clicks with the "Deploy a SQL Server Database to a Windows Azure Virtual Machine" wizard in SQL Server Management Studio. This release also adds the ability to back up and restore to/from a URL directly from SQL Server Management Studio.
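
For example, a backup to a URL looks roughly like this (the storage account, container, and database names are placeholders; the secret is the storage account access key):

-- Credential holding the storage account name and access key
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',
     SECRET = '<storage account access key>';

-- Back up straight to Windows Azure blob storage
BACKUP DATABASE SalesDB
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION, STATS = 10;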

 

  • Delayed Durability  

By default, SQL Server works as follows: 

Unless the relevant record in the log cache is physically written to the transaction log file (this is known as a log flush), the page that has changed in the buffer cache is not written physically to the data file. This protection mechanism is called Write-Ahead Logging. Therefore, until the data is written to the ldf file, the completion acknowledgment is not sent to the user. Thus, in case of a crash, SQL Server can recover the data from the log file. This ensures that your data will not be lost.

With Delayed Durability, we can change this behaviour at the database level, at the commit level, or at the atomic block level. This means that SQL Server informs the user that the process is completed before writing the data to the log file.

NOTE: If you enable delayed durability at the database, transaction, or atomic block level, your performance will increase, but there is a risk of data loss. 
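
A short sketch of the database and commit levels, with a hypothetical database and table:

-- Database level: ALLOWED lets individual commits opt in; FORCED applies it to every transaction
ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;

-- Commit level: this transaction is acknowledged before the log flush happens
BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 42;
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);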

 

  • WAIT_AT_LOW_PRIORITY option.   

This option enables you to specify the maximum duration an online operation waits for the locks it needs, and what happens when that wait expires (see the example following the table). Here is some additional information:

MAX_DURATION: The wait time, in minutes, that the online operation will wait at low priority.

ABORT_AFTER_WAIT: The action taken by the online operation when it is blocked beyond the MAX_DURATION value. The possible values are:

NONE: Continue waiting for the lock with normal priority.

SELF: Abort the current online operation itself without taking any other action.

BLOCKERS: Kill the transactions that block the online operation.
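
For example, an online index rebuild that waits at low priority for up to 5 minutes and then kills its blockers could look like this (index and table names are hypothetical):

ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH
(
    ONLINE = ON
    (
        WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = BLOCKERS)
    )
);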

 

  • Incremental statistics 

A major problem with updating statistics on large tables in SQL Server is that the entire table always has to be scanned, for example when using the WITH FULLSCAN option, even if only recent data has changed. This is also true when using partitioning: even if only the newest partition has changed since the last time statistics were updated, updating statistics again required scanning the entire table, including all the partitions that did not change. Incremental statistics, a new SQL Server 2014 feature, can help with this problem.

Using incremental statistics, you can update only the partition or partitions that you need, and the information on these partitions is merged with the existing information to create the final statistics object. Another advantage of incremental statistics is that the percentage of data changes required to trigger an automatic update of statistics now works at the partition level, which basically means that changes to only 20% of the rows per partition (on the leading statistics column) are now required. Unfortunately, the histogram is still limited to 200 steps for the entire statistics object in this version of SQL Server.
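
A minimal sketch, assuming a hypothetical partitioned table dbo.Sales (the statistics name, column, and partition number are placeholders):

-- Create partition-aware statistics
CREATE STATISTICS Stat_Sales_OrderDate
ON dbo.Sales (OrderDate)
WITH FULLSCAN, INCREMENTAL = ON;

-- Later, refresh only the partition(s) that changed, here partition 12
UPDATE STATISTICS dbo.Sales (Stat_Sales_OrderDate)
WITH RESAMPLE ON PARTITIONS (12);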

  • Buffer Pool Extension 

Introduced in SQL Server 2014, the Buffer Pool Extension (BPE) adds a new layer of cache, helping to keep clean pages accessible with less effort. When a page is copied from disk into memory, its first state is always clean. The BPE feature extends the buffer pool cache with nonvolatile storage (usually an SSD – Solid State Drive). With BPE, the buffer pool can page clean pages between RAM and the SSD instead of simply discarding them. Because of the lower latency and better random I/O performance of SSDs, the buffer pool extension significantly improves I/O throughput. More details about BPE (an enablement example follows the list):

  • Buffer Pool Extension is a new feature in SQL Server 2014 and later.

  • The feature is also supported in the Standard Edition of SQL Server.

  • Buffer Pool Extension is only supported on 64-bit servers.

  • You can enable and disable the feature without interfering with the instance's availability – no restart is required.

  • Buffer Pool Extension is a transparent solution – no application change is needed.

  • Buffer Pool Extension deals with clean pages only – no possibility of data loss.

  • The objective is to improve OLTP-like workloads; however, systems with a very high number of writes may not take advantage of this feature.

  • Enabling the Buffer Pool Extension on systems that use In-Memory OLTP is a good practice, as both technologies are complementary.

  • The Buffer Pool Extension is nothing more than a file, placed on the fastest disk available.
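
Enabling and disabling it is a single server-level command; a sketch (the file path and size below are placeholders):

-- Check the current state
SELECT * FROM sys.dm_os_buffer_pool_extension_configuration;

-- Enable, pointing the extension file at an SSD volume
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON (FILENAME = 'S:\BPE\SqlBufferPool.BPE', SIZE = 64 GB);

-- Disable (the extension file is removed)
ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;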

 

 

  • SQL Server Resource Governor  

The Resource Governor was introduced in SQL Server 2008. This feature is used to control the consumption of the available resources by limiting the amount of CPU, memory, and IOPS used by incoming sessions, preventing performance issues caused by high resource consumption.

The Resource Governor simply differentiates the incoming workload and allocates the needed CPU, memory, and IOPS resources based on the predefined limits for each workload. In this way, the SQL Server resources are divided among the current workloads, reducing the possibility that a single workload type consumes all resources while competing for them. A minimum resource limit can also be specified in the Resource Governor, which allows you to set the proper resource level for each workload type.

The Resource Governor feature is very useful when you have many databases hosted in your SQL Server and many applications connecting to these databases. These connections are competing for the available SQL Server resources, affecting each other’s performance. Using the Resource Governor feature will overcome this kind of performance issue. 

There are three main components that form the Resource Governor: Resource Pools, Workload Groups, and the Classifier.
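
A minimal sketch of the three components, using made-up pool, group, and login names (the classifier function must be created in the master database):

-- 1. Resource pool: limits for a reporting workload
CREATE RESOURCE POOL ReportingPool
WITH (MIN_CPU_PERCENT = 0, MAX_CPU_PERCENT = 30);

-- 2. Workload group bound to that pool
CREATE WORKLOAD GROUP ReportingGroup
USING ReportingPool;
GO

-- 3. Classifier function: routes sessions of a given login into the group
CREATE FUNCTION dbo.fn_RGClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';
    IF SUSER_SNAME() = N'reporting_user'
        SET @grp = N'ReportingGroup';
    RETURN @grp;
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_RGClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;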

 

 

SQL 2017 

 

  • SQL Server 2017 on Linux 

Clearly, two of the biggest changes in SQL Server 2017 are its support for Linux and for Docker containers, both of which could have a big impact on database trends in the future. The release of SQL Server 2017 for Linux is a step towards extending its capabilities to a high-performance platform, one that has already set some TPC benchmark records.

 

  • Python support in ML (Machine Learning) 

In SQL Server 2017, Python is integrated and now falls under the umbrella of the feature called Machine Learning Services. As Python is a common programming language adopted by data scientists and database administrators, the ability to run Python code from a T-SQL script enables machine learning capabilities directly where large amounts of data live. Further, computations are performed on the server itself without having to first transfer the data to a client, which would otherwise place a large load on the network. And because the data stays in place, you can take full advantage of the performance benefits brought by SQL Server technologies such as in-memory tables and columnstore indexes. Python code is also very easy to deploy and can be written directly inside a Transact-SQL command.
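
A minimal sketch, assuming Machine Learning Services (with Python) is installed; the data here is made up:

-- One-time setup (a service restart may be required)
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE;
GO

-- Run Python inside T-SQL; the query result arrives as a pandas data frame
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
OutputDataSet = InputDataSet
OutputDataSet["doubled"] = OutputDataSet["amount"] * 2
',
    @input_data_1 = N'SELECT CAST(10 AS INT) AS amount UNION ALL SELECT 25'
WITH RESULT SETS ((amount INT, doubled INT));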

 

  • Graph Databases 

Graph databases are used to model hierarchies or relationships (such as employee and manager data). There are many workarounds to represent such relationships (for example, a recursive CTE), but they are still workarounds. The SQL graph feature, on the other hand, is capable of drilling through many levels of relationships with ease. The design of the model and the execution of queries have become much simpler, more seamless, and thereby more efficient, and the amount of coding has been brought down significantly.
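
A small sketch of node and edge tables with a MATCH query (the table and column names are hypothetical):

CREATE TABLE dbo.Person (Id INT PRIMARY KEY, Name NVARCHAR(100)) AS NODE;
CREATE TABLE dbo.ReportsTo AS EDGE;

INSERT INTO dbo.Person VALUES (1, N'Alice'), (2, N'Bob');
INSERT INTO dbo.ReportsTo ($from_id, $to_id)
VALUES ((SELECT $node_id FROM dbo.Person WHERE Id = 2),
        (SELECT $node_id FROM dbo.Person WHERE Id = 1));

-- Who reports to whom?
SELECT e.Name AS Employee, m.Name AS Manager
FROM dbo.Person AS e, dbo.ReportsTo AS r, dbo.Person AS m
WHERE MATCH(e-(r)->m);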

 

  • Resumable Online Index Rebuild 

SQL Server 2017 is the first commercial RDBMS to support pause and resume functionality for index maintenance operations. Most administrators consider an index rebuild on VLDBs a daunting task to manage. Many critical database solutions do not permit offline database maintenance operations, and in most cases the database design plays an important role in that regard. SQL Server 2017 gives index maintenance tasks great flexibility as an alternative way of managing maintenance operations. There are situations where database administrators might need to temporarily free up system resources. For example, what if a priority task needs resources and a lower-priority index rebuild is eating them away? We would rather have the index rebuild run in some other available maintenance window, but at the same time, what if the rebuild was halfway through? We do not want to lose that state either. In such a case, the operation can be paused and resumed during a maintenance window (see the sketch after the list below). This can be termed a "piecemeal index maintenance operation."

  1. Resume: Resume an index rebuild operation after a failure.

  2. Pause: Pause the rebuild operation (and resume it at a later point).

  3. Rebuild: Rebuild large indexes with minimal log space usage.
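
A sketch of the workflow with a hypothetical index and table:

-- Start a resumable online rebuild
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 120 MINUTES);

-- Pause it when resources are needed elsewhere...
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders PAUSE;

-- ...and resume (or abort) it later, for example in the next maintenance window
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders RESUME;
-- ALTER INDEX IX_Orders_CustomerId ON dbo.Orders ABORT;

-- Monitor progress
SELECT name, state_desc, percent_complete FROM sys.index_resumable_operations;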

 

  • Automatic database tuning 

Monitoring the databases for optimal query performance, creating and maintaining required indexes, and dropping rarely used, unused, or expensive indexes is a common database administration task. SQL Server 2017 can now assist database administrators in performing these routine operations by identifying problematic query execution plans and fixing SQL plan performance problems. Automatic tuning begins with continuously monitoring the database and learning about the workload that it serves. Automatic database tuning is based on Artificial Intelligence; AI is now providing great flexibility in managing and tuning the performance of database systems. 
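
A brief sketch of the T-SQL side (FORCE_LAST_GOOD_PLAN relies on the Query Store being enabled for the database):

-- Enable automatic plan correction for the current database
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- See what the engine has detected and what it recommends or has applied
SELECT reason, score, state, details
FROM sys.dm_db_tuning_recommendations;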

 

 

SQL 2019 

 

  • Big Data Clusters 

  • Memory-optimized TempDB metadata: TempDB is one of the biggest sources of latency in SQL Server. Servicing requests for temporary data structures and maintaining metadata about those structures is one of the most significant bottlenecks in SQL Server. Memory-optimized TempDB metadata addresses both of these concerns by allowing TempDB metadata to be moved into memory, greatly improving the performance of any workload that makes heavy use of temporary data structures (a configuration sketch follows this list).

  • Intelligent performance 

  • Accelerated database recovery helps the database recover quickly during a rollback (an enablement example follows this list). It is a database-level setting and is disabled by default. You can kill a session almost instantly, with very limited rollback time. It maintains a persisted version store (together with a secondary log stream, the sLog) to keep previous versions of the data prior to modification.

  • Always encrypted with secure enclaves 

  • Data discovery and classification: SQL Server Management Studio 17.5 introduced a new feature for discovering and classifying data into different categories. This helps to discover sensitive data such as SSNs, credit card numbers, bank account details, personal records, etc. It is critical for each organization that such data is kept secure and compliant under SOX, PCI, and GDPR. The Classify Data feature adds extended properties to the columns to specify the label and the information type. This option is available in SSMS version 17.5 and above under Tasks -> Data Discovery and Classification -> Classify Data (right-click the database). A T-SQL example follows this list.

  • Kubernetes support 

  • UTF-8 support 

  • Java support 
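
For the memory-optimized TempDB metadata item above, enabling it is a single server-level switch; a sketch (a restart of the instance is required for it to take effect):

ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

-- Verify after the restart
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized');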
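For accelerated database recovery, a minimal enablement sketch with a hypothetical database name:

ALTER DATABASE SalesDB SET ACCELERATED_DATABASE_RECOVERY = ON;

-- Optionally place the persisted version store on its own filegroup:
-- ALTER DATABASE SalesDB SET ACCELERATED_DATABASE_RECOVERY = ON
--     (PERSISTENT_VERSION_STORE_FILEGROUP = [VersionStoreFG]);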
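For data discovery and classification, SQL Server 2019 also exposes the classification as first-class metadata via T-SQL (the table and column below are hypothetical):

ADD SENSITIVITY CLASSIFICATION TO dbo.Customer.CreditCardNumber
WITH (LABEL = 'Highly Confidential', INFORMATION_TYPE = 'Credit Card Number');

-- Review existing classifications
SELECT * FROM sys.sensitivity_classifications;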
