A stored procedure is unique in that it can control its execution context via the "environmental" SET options. (Since a trigger is a special kind of stored procedure, can a trigger change its SET settings?) For example, we can issue SET TRANSACTION ISOLATION LEVEL at the beginning of a stored procedure to control all the statements in it and to provide the default for any inner stored procedures it calls.
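As a minimal sketch (the procedure and table names here are hypothetical, not from the original), setting the isolation level at the top of a procedure might look like:

```sql
CREATE PROCEDURE dbo.usp_ReadSerializable
AS
BEGIN
    -- Applies to every statement in this procedure, and becomes the
    -- default for any procedure it calls; it does NOT leak back to
    -- the caller's session when the procedure returns.
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

    SELECT COUNT(*) FROM dbo.SomeTable;  -- runs under SERIALIZABLE
END;
```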
According to BOL,
- The scope of settings SET inside a stored procedure is the procedure itself; they do not affect the calling (execution) environment.
- Settings not explicitly SET in the stored procedure default to those in effect at execution time, except for two special settings: ANSI_NULLS and QUOTED_IDENTIFIER.
- The values of those two settings at the time the stored procedure was created exclusively control the corresponding behavior, overriding both the execution-time settings and any SET statements within the procedure. In other words, those two settings are fixed for the stored procedure once it is created; to change them, the procedure must be altered, or dropped and re-created, under different settings.
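A small demo (the procedure name is illustrative) showing the creation-time setting sticking to the procedure:

```sql
SET ANSI_NULLS OFF;
GO
CREATE PROCEDURE dbo.usp_NullCompare
AS
    -- Because ANSI_NULLS was OFF at creation time, NULL = NULL
    -- evaluates to TRUE inside this procedure, no matter what the
    -- caller's session setting is.
    SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END;
GO
SET ANSI_NULLS ON;         -- has no effect on the procedure's behavior
EXEC dbo.usp_NullCompare;  -- still returns 'equal'
```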
We can find out the two settings at the time of stored procedure creation by querying:
SELECT OBJECTPROPERTY(OBJECT_ID('spName'), 'ExecIsQuotedIdentOn');
SELECT OBJECTPROPERTY(OBJECT_ID('spName'), 'ExecIsAnsiNullsOn');
Why are those two settings special?
QUOTED_IDENTIFIER is actually a parse-time setting: the parser must decide whether a statement like SELECT * FROM "table" is valid based on it. For consistency, it makes sense that the behavior applied during parsing is also kept at execution time, regardless of any different setting then in effect. (Other parse-time settings, such as FIPS_FLAGGER, OFFSETS, and PARSEONLY, do not affect execution results.)
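The parse-time nature of the setting is easy to see in a batch (note the GO separators, since the setting is evaluated when the batch is parsed):

```sql
-- With QUOTED_IDENTIFIER ON, double quotes delimit identifiers:
SET QUOTED_IDENTIFIER ON;
GO
SELECT 1 AS "select";   -- "select" is a quoted identifier: valid
GO
-- With QUOTED_IDENTIFIER OFF, double quotes delimit string literals:
SET QUOTED_IDENTIFIER OFF;
GO
SELECT "a string";      -- a literal, equivalent to 'a string'
GO
```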
The ANSI_NULLS setting can directly affect query results, so it is understandable that it gets the same consistency treatment. The question is why other settings do not. CONCAT_NULL_YIELDS_NULL, for instance, also affects query results, and the optimizer needs to know these other settings when it is looking for the "best" query plan. SQL Server's resolution is to include these settings as part of each cached plan for a stored procedure; any change to them in the execution context causes the optimizer to recompile and generate another plan.
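You can see the per-plan settings via the set_options plan attribute. A sketch (run the same procedure under different SET options first, and you should see multiple plans with distinct set_options values):

```sql
-- Each distinct combination of plan-affecting SET options produces
-- its own cached plan, distinguished by the set_options attribute
-- (a bitmask of the options in effect at compile time).
SELECT cp.plan_handle, pa.value AS set_options
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
WHERE pa.attribute = 'set_options';
```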
Another question that has been on my mind for quite some time and still has no clear answer: what execution context does an activation stored procedure run in? Since there is no explicitly created session for such a procedure, do the settings default to the database-level settings (except, of course, for the two special settings, QUOTED_IDENTIFIER and ANSI_NULLS, which were fixed at creation time)? But the default database settings are all OFF. Is there a risk here?
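The database-level defaults in question can be inspected in sys.databases, for example:

```sql
-- Database-level SET option defaults for the current database.
-- On a default installation these are typically all 0 (OFF).
SELECT name,
       is_ansi_nulls_on,
       is_quoted_identifier_on,
       is_concat_null_yields_null_on,
       is_ansi_warnings_on
FROM sys.databases
WHERE name = DB_NAME();
```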
The same question applies to other client apps (besides the native SqlClient used by SSMS), such as ADO.NET. BOL seems to imply that ODBC and OLE DB set these options automatically.
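One way to check what a given driver actually established is to query the session DMV from the connection in question, e.g.:

```sql
-- What SET options did the client driver establish for my session?
SELECT session_id, program_name,
       ansi_nulls, quoted_identifier,
       concat_null_yields_null, transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```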