Monday, 29 January 2018

Managing SQL Server Transaction Log File

Overview

In the previous section we discussed how to create an audit trigger in SQL Server, how it helps with database security, and the risks it can pose to a database. In this section, we will discuss the SQL Server log file, its uses, and its components. Tackling database problems is not an easy task; as a database administrator, you need to know your database thoroughly.

Luckily, SQL Server creates various log files that can help us solve problems such as setup issues and application errors. Let’s discuss the SQL Server log file.

What is a Log File in SQL Server?

A log file is a database file in SQL Server that stores log information and uses the .ldf extension. The information a log file stores can be helpful at the time of data loss or recovery. A log file records the events that are executed against a database, and its size depends on the logging level the user sets for the database. Every SQL Server database must have at least one log file. The main reason for using a log file in SQL Server is to restore the database if a system failure occurs.

SQL Server Transaction logs

Every modification in SQL Server counts as a transaction, and these transactions are recorded in the database's transaction log file. Transaction logs are a very important part of a SQL Server database because they are helpful in various circumstances, such as database forensics. For example, if a third party tampers with a database, the transaction logs can be used to track what changes that person made. Also, if a system failure occurs, the transaction logs are required to bring the database back to a consistent state.

How to manage the size of a log file?

The logging level affects the log file's size, so it can be useful to physically shrink or expand the log file of a SQL Server database. To maintain the size of a log file, you must truncate the transaction log on a regular basis to keep it from filling up. However, some factors can delay log truncation, so it is important to monitor log size. You should also avoid operations that are of little use to your database but inflate the size of your log file.

To manage the size of a log file, we need to perform some basic operations, which are mentioned below:

  • Monitor Log Space Use
  • Shrink log file size
  • Add or Enlarge a Log File
  • Optimize the Size of the tempdb Transaction Log
  • Control the Growth of a Transaction Log File

Monitor Log Space Use: Log space use can be monitored with DBCC SQLPERF. This command returns information about the amount of log space currently in use and indicates whether the transaction log is filling up and needs truncation. It is also recommended to avoid overloading the log disk.
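
As a quick sketch, log space usage for every database on the instance can be checked like this (the exact output columns may vary by SQL Server version):

```sql
-- Report log size (MB) and percentage of log space used for each database
DBCC SQLPERF (LOGSPACE);
```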

Shrink Log File Size: The shrink operation is applied when you want to reduce the physical size of the log file. This is helpful when you do not need the unused space and you know the transaction log file contains unused space. Shrinking occurs only while the database is online and at least one virtual log file is free. However, in some cases, shrinking the log file may not be possible until after the next log truncation.
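
A minimal shrink sketch, assuming a hypothetical database named SalesDB whose log file has the logical name SalesDB_log (substitute your own names):

```sql
USE SalesDB;
GO
-- Shrink the transaction log file to approximately 512 MB
DBCC SHRINKFILE (SalesDB_log, 512);
GO
```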

Add/Enlarge Log File: Another way to gain extra space is to enlarge the existing log file, if extra disk space is available, or to add a secondary log file to your database on a different disk.

  • To add another log file to your database, use the ADD LOG FILE clause of the ALTER DATABASE statement. This allows the log to grow.
  • To enlarge the log file, use the MODIFY FILE clause of the ALTER DATABASE statement, specifying the SIZE and MAXSIZE options.
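
The two operations above can be sketched as follows; the database name, logical file names, and file path are all hypothetical:

```sql
-- Add a secondary log file on a different disk
ALTER DATABASE SalesDB
ADD LOG FILE (
    NAME = SalesDB_log2,
    FILENAME = 'E:\SQLLogs\SalesDB_log2.ldf',
    SIZE = 1GB
);

-- Enlarge the existing log file
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_log, SIZE = 4GB);
```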

Optimize tempdb Transaction Log Size: Restarting your SQL Server instance resets the tempdb transaction log to its original size, and repeated autogrowth back to the working size reduces performance. This can be avoided by increasing the tempdb log size right after starting or restarting the SQL Server instance.

Control Transaction Log File Growth: The ALTER DATABASE command can be used to manage the growth of a transaction log file. Note the following:

  • Use the SIZE option to change the existing file size.
  • Use the FILEGROWTH option to change the growth increment.
  • Use the MAXSIZE option to set the maximum file size, or UNLIMITED for unrestricted growth.
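
The options above can be sketched with separate MODIFY FILE statements (the database and logical file names are hypothetical):

```sql
-- Change the current size of the log file
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log, SIZE = 2GB);

-- Change the growth increment
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log, FILEGROWTH = 512MB);

-- Remove the upper limit on growth
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log, MAXSIZE = UNLIMITED);
```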

Log File Operations When the Transaction Log Grows

A SQL Server database can have more than one log file, but adding additional transaction log files does not improve performance, because modifications are written to only one file at a time.

Having more than one transaction log file is recommended only when the first one is full or the disk drive is running out of space.

As changes are made to the database, the transaction log grows automatically. How much is logged is controlled by the database's recovery model, so it is important to manage log growth by applying one of the three recovery models.

  • Simple recovery model: does not support operations that require transaction log backups; the log is truncated automatically.
  • Full recovery model: requires log backups and protects against work loss if a data file is damaged.
  • Bulk-logged model: an adjunct of the full recovery model that also requires log backups but permits high-performance, minimally logged bulk copy operations.
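
Switching between these recovery models is done with ALTER DATABASE; a sketch, assuming a hypothetical database named SalesDB:

```sql
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;      -- log truncated automatically, no log backups
ALTER DATABASE SalesDB SET RECOVERY FULL;        -- full logging, log backups required
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED; -- full logging except minimally logged bulk operations
```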

These three recovery models determine how a transaction log file grows.

Conclusion:

After reading about the SQL Server log file, we know how important it is for the database: it saves all the transactions performed in a database, and when the database must be brought back to a consistent state, only the log file makes that possible. However, if a power failure occurs, the log file itself can become corrupt. In such a scenario, a SQL Server log file analyzer tool is recommended to help you read and analyze your SQL Server log file.

Monday, 22 January 2018

Learn Different Options for Importing Data Into SQL Server

Most of the time, users need to import their data into SQL Server for proper management. You can select from a variety of source and destination data source types, choose tables to copy or specify your own query for data extraction, and save the work as an SSIS package. SQL Server also provides other options for importing data, such as BCP, OPENQUERY, OPENROWSET, and BULK INSERT. In the following write-up, we discuss these different ways in detail.

Import Using BCP

The BCP utility is a command-line tool that uses the Bulk Copy Program API to bulk copy data between an instance of SQL Server and data files. With it, you can export data from a SQL Server database into a data file and vice versa, and create format files that support both import and export operations.

To use the BCP utility for these tasks, run a bcp command in a Command Prompt window using the following syntax:

bcp {table|view|"query"}
    {out|queryout|in|format}
    {data_file|nul}
    {[optional_argument]...}

The bcp command takes three main arguments. The first, (table|view|"query"), represents the source or destination data in the SQL Server database. You can export data from a table or view, or via a query; if you specify a query, you must surround it in quotation marks.

The second argument, (out|queryout|in|format), determines the command mode. When running a bcp command, you must specify one of the following four modes:

  • out: transfers data from a table or view into a data file.
  • queryout: exports the data retrieved by a query into a data file.
  • in: imports data from a data file into a table or view.
  • format: creates a format file based on a table.

The third argument, (data_file|nul), is the full path of the data file, or nul when no data file should be specified. If you are importing data, you must specify the file that contains the source data.
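
Putting the three arguments together, here is a hedged export/import example run from a Command Prompt; the database, table, server name, and file path are all hypothetical:

```
rem Export the table to a character-format data file, using a trusted connection
bcp SalesDB.dbo.Customers out C:\Data\customers.dat -S MYSERVER -T -c

rem Import the same file back into a staging table
bcp SalesDB.dbo.Customers_Staging in C:\Data\customers.dat -S MYSERVER -T -c
```

Here -S names the server, -T uses Windows authentication, and -c selects character format.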

Import Using BULK INSERT

The BULK INSERT statement is a Transact-SQL statement that allows you to bulk-load a data file into a SQL Server database table.
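
A minimal sketch, assuming a hypothetical table dbo.Customers and a comma-separated file with a header row:

```sql
BULK INSERT dbo.Customers
FROM 'C:\Data\customers.csv'
WITH (
    FIELDTERMINATOR = ',',  -- column delimiter
    ROWTERMINATOR = '\n',   -- row delimiter
    FIRSTROW = 2            -- skip the header row
);
```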

OPENROWSET

Various situations arise where users need to run an ad hoc query that retrieves data from a remote OLE DB data source and bulk-loads it into a SQL Server table. In such a situation, you can use the OPENROWSET function in T-SQL to pass a connection string and a query to the data source in order to retrieve the relevant data. The OPENROWSET function can retrieve data from any source that has a registered OLE DB provider, such as a remote instance of SQL Server.
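
A hedged OPENROWSET sketch against a remote SQL Server instance; the server, database, and table names are placeholders:

```sql
SELECT a.*
FROM OPENROWSET(
    'SQLNCLI',
    'Server=REMOTESERVER;Trusted_Connection=yes;',
    'SELECT CustomerID, Name FROM SalesDB.dbo.Customers'
) AS a;
```

Note that the instance-level Ad Hoc Distributed Queries option must be enabled (via sp_configure) before OPENROWSET can be used this way.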

OPENDATASOURCE

The OPENDATASOURCE function makes it easy to supply ad hoc connection information as part of a four-part object name, acting as a one-time linked server. There is no need to create a linked server to query other sources that are accessed infrequently.
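
A hedged sketch of the four-part name form; the server, database, and table names are placeholders:

```sql
-- Provider and connection string, followed by database.schema.object
SELECT *
FROM OPENDATASOURCE(
    'SQLNCLI',
    'Data Source=REMOTESERVER;Integrated Security=SSPI'
).SalesDB.dbo.Customers;
```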

OPENQUERY

The OPENQUERY command is mainly used to initiate an ad hoc distributed query using a linked server. It is invoked by specifying OPENQUERY as the table name in the FROM clause. It opens the linked server and executes the query as if it were running on that server.
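
A hedged example, assuming a linked server named REMOTESRV has already been created (the table names are placeholders):

```sql
SELECT *
FROM OPENQUERY(REMOTESRV, 'SELECT CustomerID, Name FROM SalesDB.dbo.Customers');
```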

While executing queries and receiving the data directly this way works well, there are performance implications when joining the results of OPENQUERY to a local table. Typically, joining remote tables to local tables across the network is not an efficient way of querying. In some circumstances, it may be better to import a copy of the remote table locally and then perform the join locally.

Conclusion

Importing data is quite important for managing it in an appropriate way. In the above discussion, we covered the different options for importing data into SQL Server.

Friday, 19 January 2018

What Does REPAIR_ALLOW_DATA_LOSS in SQL Server Signify?

REPAIR_ALLOW_DATA_LOSS is the repair level recommended by DBCC CHECKDB when a database is found in a corrupted state. SQL users and even administrators are at times confused about what this option really does. Note that it contains the term REPAIR, which signifies that the database will be repaired while the corrupted data residing in it is deleted. Repair means the data will not simply be fixed but will be brought to a consistent state, even if that requires deleting some data.

DBCC CHECKDB is accompanied by REPAIR_ALLOW_DATA_LOSS because issues beyond minor non-clustered index problems require the deletion of some data in order to be repaired. If a corrupt record is found on a data page, the command may end up deleting the entire page, including all the other records on it, to fix the corruption. Therefore, the developers deliberately added the term ALLOW_DATA_LOSS after REPAIR to make it clear to users that the repair process may involve some data loss.

Repair & Data Deletion

Even experienced SQL administrators are at times confused about what REPAIR_ALLOW_DATA_LOSS actually does. Note that the sole purpose of repair is not to save user data; on the contrary, some data is bound to be deleted in the process.

The repair option is used to bring the database to a structurally consistent state without delay. The repair process is therefore engineered to work efficiently in every circumstance, in a fast and reliable manner. It simply deletes the corrupted data along with whatever it is linked to, whether that is a record or a page. The repair process deals only with the physical structure of the database and has no concern for the higher-level logical structure.

The end result may be a little different owing to the deletions that are made; however, the database structure will be as consistent as it was before the corruption.

When the DBCC CHECKDB command is run, a recommendation stating the minimum repair option is provided to the user. The message may look like this:

CHECKDB found 0 allocation errors and 10 consistency errors in database 'projectdb'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB

The recommendation is the minimum level of repair required to fix all the errors found; note that not every individual error necessarily requires this repair level.
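
If you do decide to follow the recommendation, the database must first be placed in single-user mode. A sketch, using the database name from the message above:

```sql
ALTER DATABASE projectdb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('projectdb', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE projectdb SET MULTI_USER;
```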

Conclusion

The best way to fix a SQL Server database without fearing data loss is to restore it from a healthy, recent backup. This enables users to restore the entire database without compromising its integrity. If you don't have a healthy backup, you can try an alternate solution, such as a third-party tool, to bring your database back.

Wednesday, 17 January 2018

How to Run DBCC CHECKDB to Check SQL Database Integrity

Database Console Command CHECKDB (DBCC CHECKDB) is used to check the physical and logical integrity of the objects in a SQL Server database. The command is supported in databases that contain memory-optimized tables, but validation is only performed on disk-based tables. The DBCC repair options are unavailable for memory-optimized tables, which makes regular database backups essential: if an issue arises in a memory-optimized table, the data can only be restored from the last backup.

The various operations that are performed by CHECKDB are:

  • DBCC CHECKALLOC execution on SQL database.
  • DBCC CHECKTABLE execution on each table and view of SQL database.
  • DBCC CHECKCATALOG execution on SQL database.
  • Validation of content in the indexed view of the database.
  • Validation of link-level consistency amidst the file directories and the table metadata.
  • Validation of service broker data.

Since DBCC CHECKDB runs all of these commands, it is not necessary to run CHECKALLOC, CHECKTABLE, and CHECKCATALOG separately.

DBCC CHECKDB Syntax
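
The general form of the command is roughly as follows (simplified from the full Transact-SQL grammar):

```sql
DBCC CHECKDB (
    { database_name | database_id | 0 }
    [ , NOINDEX
      | , { REPAIR_ALLOW_DATA_LOSS | REPAIR_FAST | REPAIR_REBUILD } ]
)
[ WITH { ALL_ERRORMSGS | EXTENDED_LOGICAL_CHECKS | NO_INFOMSGS
       | TABLOCK | ESTIMATEONLY | PHYSICAL_ONLY | DATA_PURITY } ]
```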

DBCC CHECKDB accepts the following arguments:

database_name | database_id | 0
This is the name or ID of the database. If it is not specified, or 0 is given, the current database is used.
NOINDEX
Specifies that integrity checks of nonclustered indexes should not be performed. This decreases overall execution time and does not affect the data residing in the tables.
REPAIR_ALLOW_DATA_LOSS
This option repairs the errors found in the database, which can lead to the loss of some data.

Note: This option is supported by Microsoft but is not always an ideal way to bring the database back to a good physical state. This is because it deletes whatever data is found to be corrupted, which can lead to more data loss than the original corruption caused. It should therefore be adopted only as a last resort.

REPAIR_FAST
This argument only maintains backward compatibility and does not perform any repairing.
REPAIR_REBUILD
This argument performs repairs that carry no risk of data loss, such as rebuilding indexes.
ALL_ERRORMSGS
Shows all the errors generated for each object. Including or excluding this option has no effect on ordering, as error messages are sorted by object ID. Up to 1,000 error messages can be generated per object.
EXTENDED_LOGICAL_CHECKS
This argument runs logical consistency checks on indexed views and indexes when the database compatibility level is 100 or higher.
NO_INFOMSGS
Suppresses all informational messages.
TABLOCK
This option obtains an exclusive lock on the database, which can increase the speed of DBCC CHECKDB on a heavily loaded server but decreases the availability of the database for concurrent operations.
ESTIMATEONLY
Estimates the amount of tempdb space the database would require to run the CHECKDB command, without performing the checks.
PHYSICAL_ONLY
This limits the check to the physical structure of the database. This low-overhead check of the physical database can detect torn pages, checksum failures, and common hardware problems.
DATA_PURITY
This option checks the database for column values that are out of range or invalid. Column-value integrity checks are enabled by default and do not normally require the DATA_PURITY option.
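
A typical invocation combining some of these options, for a hypothetical database named projectdb:

```sql
-- Full integrity check, suppressing informational messages
DBCC CHECKDB ('projectdb') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Faster, physical-structure-only check for routine monitoring
DBCC CHECKDB ('projectdb') WITH PHYSICAL_ONLY;
```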

Things To Be Kept In Mind

  • Disabled indexes cannot be checked by DBCC CHECKDB.
  • User-defined, byte-ordered types must be serialized correctly for DBCC CHECKDB to execute; otherwise, error 2537 occurs.
  • DBCC CHECKDB cannot be run directly against the Resource database, as it can only be modified in single-user mode.

Error Messages Generated By DBCC CHECKDB

When the CHECKDB command finishes running, a message is written to the SQL Server error log. On success, the message indicates success and the total time the command ran. On failure, the message indicates that the process was terminated due to an error, and it includes a state value that describes the failure.

Error Report

Whenever corruption is detected by the CHECKDB command, a dump file named SQLDUMPnnnn.txt is created in the log directory of the SQL Server instance. If the Feature Usage data collection and Error Reporting features are enabled for the instance, the error report is sent to Microsoft for product improvement purposes.

Database Restoration

If errors are generated in SQL Server, it is recommended to restore the database from the last backup instead of repairing it. If no backup exists, you can resort to the repair options, but repairing with REPAIR_ALLOW_DATA_LOSS can lead to the deletion of some data.
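
A minimal restore sketch, assuming a full backup exists at a hypothetical path:

```sql
RESTORE DATABASE projectdb
FROM DISK = 'C:\Backups\projectdb_full.bak'
WITH REPLACE, RECOVERY;
```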

Alternative Resolution For Database Recovery

Basically, the DBCC CHECKDB command checks the physical and logical consistency of the database, examining the pages, indexes, and other components of the SQL Server database. At some critical points, however, it may fail to recover the database. In the absence of a backup of the MS SQL Server database, the repair options can delete an appreciable amount of data. Therefore, in order to recover the database without compromising data integrity, you can opt for a Microsoft SQL database recovery solution, which promises complete recovery of the database without deleting any data.

Monday, 15 January 2018

Steps to Configure SQL Server to Listen On A Specific TCP Port

Database management is essential today, and doing it well requires a platform. SQL Server is one such platform for managing data relationally. It was developed by Microsoft and is used by many users, especially in organisations, for data management. This database server is software for storing as well as retrieving data. SQL Server has two kinds of port allocation: static and dynamic. By default, SQL Server listens on a fixed TCP port, 1433. However, users sometimes want to configure the server to listen on a specific TCP port via SQL Server Configuration Manager. In the following section, we will discuss the difference between static and dynamic ports and then learn how to configure the server to listen on a specific TCP port.

Difference Between Static and Dynamic Port

If a user configures an instance of SQL Server to use a static port, the server listens on the previously specified static port. By default, SQL Server listens on TCP port 1433, and SQL Server clients send all requests to the static port the server is listening on. However, if another program running on the system is already using the configured static port, SQL Server will not be able to listen on that port. The port can be changed to a dynamic port as required.

When SQL Server is configured to use a TCP port different from the default, it is called a dynamic port. When a user starts an instance of SQL Server with dynamic port allocation, the port is set to "0", which causes SQL Server to request a free port number from the operating system. Alternatively, a user can assign any free port number to SQL Server, and the server starts listening on the allocated port.

Need to Configure a Server to Listen on a Specific TCP Port

Following are the reasons due to which users alter the default port number:

  • Users change the port setting mainly for security reasons.
  • Another reason to change from static to dynamic is a requirement of a client application.

Method to Change SQL Server Static Port to Dynamic

Following are the steps, which a user needs to follow for changing the SQL Server port from Static to Dynamic:

  • First, run SQL Server Configuration Manager on your system.
  • Then, click on "SQL Server Network Configuration".
  • Now, choose the instance from the list that you want to configure to listen on a specific port.
  • Right-click on the TCP/IP protocol and select the "Properties" option.
  • Click on the "IP Addresses" tab.
  • Enter the port number you want to use instead of 1433; you can use any free port number.
  • Finish the task by choosing "SQL Server Services" >> "SQL Server" and restarting the service.

Note: You first need to clear the "0" in the TCP Dynamic Ports field to turn off dynamic port assignment.

Conclusion

SQL Server is a very useful means of managing bulk amounts of data, whether for an organisation or an individual user, and it manages the database in a well-structured form. In this write-up, we have discussed the difference between dynamic and static ports, the reasons for changing the port from static to dynamic, and how to configure the server to listen on a specific TCP port.

Friday, 12 January 2018

Alwayson Availability Groups vs Alwayson Failover Cluster Instances

SQL Server is one of the finest database management systems deployed by organizations for storing and retrieving their data. It serves data to applications that run either on the same or on different platforms. The dominant reason behind the large-scale deployment of SQL Server is the huge list of features it offers, one of which is the AlwaysOn offering. In this blog, we will discuss AlwaysOn Availability Groups and AlwaysOn Failover Cluster Instances.

AlwaysOn Availability Group

AlwaysOn Availability Groups is a disaster-recovery solution that provides an enterprise-level alternative to database mirroring. It maximizes the availability of the databases belonging to an enterprise. An availability group supports one set of read-write primary databases and one to eight sets of corresponding secondary databases.

Benefits of Alwayson Availability Groups:

The list of benefits offered by Always On Availability Group goes as follows:

  • It supports up to nine availability replicas. Since each availability replica maintains a local copy of the databases, replicas are crucial in ensuring database availability at the time of a data disaster.
  • It offers two availability modes: asynchronous-commit and synchronous-commit.
  • It supports several kinds of failover: planned manual failover, automatic failover, and forced failover.
  • It enables users to configure an availability replica to support one or both active-secondary capabilities, as per their requirements.

AlwaysOn Failover Cluster Instance

SQL Server AlwaysOn Failover Cluster Instance (FCI), a substantial part of the AlwaysOn offering, provides high availability at the server-instance level by making use of Windows Server Failover Clustering (WSFC). An FCI is a single instance of SQL Server installed across WSFC nodes. On the network, an FCI appears to be an instance running on a single computer; in reality, however, it fails over to another node when the current WSFC node becomes unavailable.

In the event of a hardware or software failure of the SQL Server, the clients and applications connected to it experience downtime. When an instance is configured as a failover cluster instance, its high availability is safeguarded by the redundant node.

Benefits of AlwaysOn Failover Cluster Instance:

  • Node redundancy protects the SQL Server instance.
  • Automatic failover occurs whenever a failure such as an OS failure, hardware failure, or application failure takes place.
  • It supports a broad range of WSFC cluster disks and storage solutions.
  • It does not require reconfiguration of the clients and applications connected to the instance during failovers.

Conclusion

AlwaysOn Availability Groups and AlwaysOn Failover Cluster Instances are two provisions that enable users to increase the availability of SQL Server. Both have their associated advantages and limitations, and organizations should choose between them according to their requirements.