Tuesday, December 31, 2013

Basic Fact and Dimension Guidelines

I find it difficult to locate a concise definition of what differentiates a fact from a dimension.

Fact Tables
• For every business process there should be at least one fact table. To find candidate facts, make a list of the measurements describing the process or interaction; the nouns describing it tend to become dimensions.
• Each item in a fact table should have a default aggregation (or derivation) rule--e.g., sum, min, max, semi-additive, not additive. Any complexities in the aggregation method must be documented. Enter the aggregation/derivation information at the end of the item's definition, in its own paragraph.
• The grain, or granularity, of the fact table should be at the lowest level for which a need has been identified and a requirement approved. Performance and storage constraints must also be considered.
• The grain of all items in the fact table should be the same. If there is a need for aggregation at more than one level, a separate fact table may be needed for each level of aggregation. Note: aggregation tables are either (a) transparent to the user, such that all SQL is written against the lowest level of granularity, or (b) explicit--i.e., seen and queried by the user. Since the dimensional model is, among other things, a tool for communicating with the user, it is preferable to include only those tables the user will see.
• Attributes describing the fact should be put in dimension tables.
Dimension Tables
• Each dimension table has one and only one lowest-level element, called the dimension grain.
• Dimension tables that are referenced (or are likely to be referenced) by multiple fact tables are "conformed dimensions." If conformed dimensions already exist for any of the dimensions in the model, their reuse is expected. If new dimensions with potential for use across the agency are being developed, the design must support anticipated cross-agency needs.
• Each non-key element should appear in only one dimension table.
• Most models should have at least one period or time dimension, and there may be more than one. Date and time may be split into two separate dimensions, especially if time is being captured at the hour or minute level.
• If a dimension table includes a code, in most cases the code's description should be included as well. For example, if branch locations are identified by a branch code, and each code represents a branch name, include both the code and the name. An alternative is to include the description and omit the code--e.g., State = California, Status = Active.
• Generally, there should be no more than twenty dimension tables per fact table; the designer should provide justification if more than twenty are required.
Keys
• The primary key of a dimension table should be a surrogate key; a source-system production key should not be used as the primary key.
• The primary keys of the dimension tables should be included in the fact table as foreign keys. Together these (and only these) foreign keys make up the fact table's primary key (in the logical view).
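The surrogate-key rule can be sketched outside SQL. Here is a minimal Python illustration (the branch codes are made up) of the idea: each source-system natural key gets mapped to a small, meaningless integer that the warehouse owns.

```python
def assign_surrogate_keys(natural_keys):
    """Map each source-system natural key to an integer surrogate key.

    The surrogate is just the next counter value; it carries no meaning
    from the source system, which is the point of the guideline above.
    """
    key_map = {}
    for nk in natural_keys:
        if nk not in key_map:
            key_map[nk] = len(key_map) + 1  # next surrogate value
    return key_map

# Hypothetical branch codes arriving from a source feed:
branch_dim = assign_surrogate_keys(["BR-001", "BR-002", "BR-001", "BR-009"])
# branch_dim == {"BR-001": 1, "BR-002": 2, "BR-009": 3}
```

In the real warehouse the same mapping lives in the dimension table itself, and the fact table stores only the surrogate values.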

Thursday, December 26, 2013

Reporting: Summary Data in Detail View

Ever need a summary column, computed over the whole record set, on each line of a detail record set? If so, here is a way to provide that in a view or report.


DECLARE @TblDetail TABLE
    (DetId INT IDENTITY(1,1), SumID INT, Descr CHAR(2), Num SMALLINT);
DECLARE @TblSum TABLE
    (SumID INT IDENTITY(1,1), Descr CHAR(2));

INSERT INTO @TblSum (Descr)
VALUES ('AA'),
       ('BB');

INSERT INTO @TblDetail (SumID, Descr, Num)
VALUES (1, 'AB', 2),
       (1, 'AC', 32),
       (2, 'CA', 2);

WITH SumDetail (SumID, SumNum) AS
(
    SELECT ts.SumID,
           SUM(td.Num)
    FROM @TblSum ts
    JOIN @TblDetail td ON ts.SumID = td.SumID
    GROUP BY ts.SumID
)
SELECT td.*,
       sd.SumNum AS Summary
FROM @TblDetail td
JOIN SumDetail sd ON sd.SumID = td.SumID;
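For anyone who thinks better in procedural code, the same idea as the CTE above, sketched in Python with the same sample rows: compute each group's sum once, then attach it to every detail row in that group.

```python
# Detail rows, mirroring @TblDetail in the SQL above.
detail = [
    {"DetId": 1, "SumID": 1, "Descr": "AB", "Num": 2},
    {"DetId": 2, "SumID": 1, "Descr": "AC", "Num": 32},
    {"DetId": 3, "SumID": 2, "Descr": "CA", "Num": 2},
]

# Group-level sums (the SumDetail CTE).
sums = {}
for row in detail:
    sums[row["SumID"]] = sums.get(row["SumID"], 0) + row["Num"]

# Detail rows with the summary column appended (the final SELECT).
result = [dict(row, Summary=sums[row["SumID"]]) for row in detail]
# Both rows in group 1 carry Summary=34; the group 2 row carries Summary=2.
```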

Tuesday, December 24, 2013

SQL Unit Tests

This presupposes you have added a unit test project to your solution. If not, create a new project in step 2 as you create the first test.
  1. Visual Studio - SQL Server Object Explorer - projects node db - Programmability - Stored Procedures.
  2. Right-click one of the stored procedures, and click Create Unit Tests to display the Create Unit Tests dialog box.
  3. Verify the unit test project you are adding it to.
  4. Name the class something meaningful, like TestProcDoesSomething.cs.
  5. Select a connection. Do you test locally on (localdb) or localhost, or remotely on your dev or build server?
  6. Deploy the database project before testing if you are running this locally or on a personal dev VM. If running on build or shared dev boxes, you may want something more explicit doing the deployment.
  7. Add a test condition in the Test Conditions window. Note that scalar results are case sensitive, since the programming language executing the test is C#.
  8. Right-click - Properties - add values for the test and provide a valid name.
  9. Delete the default inconclusive test condition. This test condition is included to indicate that test verification has not been implemented.



http://msdn.microsoft.com/en-us/library/jj851212(v=vs.103).aspx#CreateDBUnitTests

Friday, December 20, 2013

Learning MVC Post

I am starting with the premise that the out-of-the-box layout, with its separation of concerns, will make learning MVC .NET development easy enough to do on my own. The project scope: a simple web-based data generator that uses Windows authentication and the defined database security to let people create test data in any environment where they have permission to do CRUD operations, without having to create an environment manager. The features: selecting a client to create the record set for; ensuring the generated data maintains referential integrity across 15 defined data sources; and producing a correctly delimited set of flat files whose names identify them as a single test set. Business rules will be lightly modeled in the generator, but it is assumed the user has some understanding of the use cases they are trying to model.

How do I link a database project to a web project?
How do I deploy my projects?
I want to use database-first EF in this, as I model better in SQL.
I will need to learn some Razor syntax to access methods in my classes for routes.
What controls the routes, and should I care?
Do I bother with CI and unit tests for this learning experience?
How do I add RI to the model?
How do I conduct CRUD operations from a web UI?
How do I pull a full related record set into the app for management?
What is the difference between selecting ADO.NET Entity Data Model and EF 6.x DbContext Generator?
How do I allow people to select from different database projects as templates for the test data? Do I use EnvironmentVariableTarget?
 
 
Let's start with this tutorial:
http://www.asp.net/mvc/tutorials/mvc-5/introduction/

MVC is a pattern before a technology. It is all about separation of concerns.
  • Models: Classes that represent the data of the application and that use validation logic to enforce business rules for that data.
  • Views: Template files that your application uses to dynamically generate HTML responses.
  • Controllers: Classes that handle incoming browser requests, retrieve model data, and then specify view templates that return a response to the browser.

  • The URL structure dictates the routes in the code. Each page will be public and should be named meaningfully.

    ASP.NET MVC invokes different controller classes (and different action methods within them) depending on the incoming URL. The default URL routing logic used by ASP.NET MVC uses a format like this to determine what code to invoke:
    /[Controller]/[ActionName]/[Parameters]
    A method named Index is the default method that will be called on a controller if one is not explicitly specified.
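The routing convention above can be sketched as a tiny parser. This is only an illustration of the /[Controller]/[ActionName]/[Parameters] pattern, not ASP.NET's actual routing engine; the Home/Index defaults mirror the framework's default route.

```python
def parse_route(url):
    """Split an MVC-style URL into (controller, action, parameters).

    Index is the default action when none is given, and Home is the
    default controller (matching ASP.NET MVC's default route values).
    """
    parts = [p for p in url.strip("/").split("/") if p]
    controller = parts[0] if parts else "Home"
    action = parts[1] if len(parts) > 1 else "Index"
    params = parts[2:]
    return controller, action, params

parse_route("/HelloWorld/Welcome/3")  # → ("HelloWorld", "Welcome", ["3"])
parse_route("/Movies")                # → ("Movies", "Index", [])
```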

     
    Step 1... Think about how you want to move through the site. Make sketches of the information architecture. Write out some requirements. (Do those in the reverse order of how I just listed them.)

    If this is a production site think of layouts, fonts, colors, dynamic design elements and the other crap people take for granted as differentiators of web products.

    Create an MVC project in TFS (File - New  - Project - Template).
    Add the unit tests now as it is easier to add the folders and not use them than to need to add them later.

    Set the security model you expect to use. If this requires AD groups to be setup...do it now. Avoid rework.

    Add a few pages and your basic branding. I need to look up the files I did this in, but it helped me quickly represent the basic information architecture in the view classes. It also makes my app look less like ass when I debug.


    To add settings right click the project - Properties - Settings
    I added two settings before coding: GenerateFiles as a bool and SecurityModel as a string.

    Use (localdb)\Projects as the initial source of your model.
    Link the database project to the web project, and in the web project make the database project a dependency.

    When learning MVC, do not take the advice of some tutorials to use a file-based database linked to the App_Data folder. That is not the way you would build a real site (unless your site has very limited functionality), so learn the right way by creating a real database. (SQL Express at home will want to add the .mdf to App_Data; make it put the file someplace outside the folder for the web tier.)

    In Solution Explorer, right-click Models - Add New:
    a.      Select Data from the left-side pane.
    b.      Select the ADO.NET Entity Data Model.
    c.      Give your Entity Framework model a name.
    d.      Press OK.
    (Should I select the DbContext generator here if I am not using VS 2013?)
    Generate from database.
    New Connection (if you did not create your web.config already):
      SQL Server -
      name of a database server with your writable database
      database name
      security model
      test connection
      (Advanced - Encrypt if you want, connect retry, pooling info, replication for SQL Server, context connection, initial catalog if different than the db you are selecting)

    Select your tables, name the model, and take the defaults unless you have a reason not to.
    Right-click in the blank space. Add a new association:
    Include each PK/FK relationship and indicate the correct cardinality for these objects.



    One struggle is knowing which of the many articles are useful.

    Typically, when you generate an Entity Framework model (.edmx) from a database and then try to create an MVC controller with read/write actions based on it, things can go wrong; the links below cover the issues I hit.


    Deploying SQL stuff with the website varies with the complexity of the database and the rows of data in source control:
    Create a database.
    Set up the web.config.
    Right-click the project - Properties - Settings.
    There is a Package/Publish SQL tab that pulls the connection info from the web.config.
    There is a switch on the Package/Publish Web tab to include/exclude the Package/Publish SQL settings.
    http://msdn.microsoft.com/en-us/library/bb907585(v=vs.100).aspx
    http://msdn.microsoft.com/en-us/library/dd465343%28VS.100%29.aspx




    This is common sense but "If you are deploying a database to a production server, you might want to prevent the Web site from responding to page requests during the deployment process. This can help avoid errors that might result during the time when changes are only partially completed." How to: Prepare to Deploy a Web Project.

    http://www.asp.net/mvc/tutorials/hands-on-labs/aspnet-mvc-4-models-and-data-access

    Note: http://www.mikesdotnetting.com/Article/185/Building-A-Razor-Web-Pages-Site-With-Database-First-And-Entity-Framework

    http://jefferytay.wordpress.com/2013/02/19/asp-net-mvc4-how-to-use-a-database-first-ef-in-a-mvc-controller/

    http://www.c-sharpcorner.com/uploadfile/raj1979/unit-testing-in-mvc-4-using-entity-framework/

    http://www.mindstick.com/Articles/6dfea253-4698-4eb8-ab20-57d2aa61753a/?Fetch%20Data%20from%20Entity%20Framework%20Using%20ASP%20NET%20MVC4

    http://sixrevisions.com/usabilityaccessibility/information-architecture-101-techniques-and-best-practices/


    http://failuresincoding.blogspot.com/2011/12/entity-framework-provider-did-not.html


    http://msdn.microsoft.com/en-us/data/jj613116.aspx

    Links to articles on POCO and DTO...state management of data objects outside the framework.
    A POCO follows the rules of OOP. It should (but doesn't have to) have state and behavior. POCO comes from POJO, coined by Martin Fowler [anecdote here]. He used the term POJO as a way to make it more sexy to reject the framework heavy EJB implementations. POCO should be used in the same context in .Net. Don't let frameworks dictate your object's design.
    A DTO's only purpose is to transfer state, and should have no behavior. See Martin Fowler's explanation of a DTO for an example of the use of this pattern.
    Here's the difference: POCO describes an approach to programming (good old fashioned object oriented programming), where DTO is a pattern that is used to "transfer data" using objects.
    While you can treat POCOs like DTOs, you run the risk of creating an anemic domain model if you do so. Additionally, there's a mismatch in structure, since DTOs should be designed to transfer data, not to represent the true structure of the business domain. The result of this is that DTOs tend to be more flat than your actual domain.
    In a domain of any reasonable complexity, you're almost always better off creating separate domain POCOs and translating them to DTOs. DDD (domain driven design) defines the anti-corruption layer (another link here, but best thing to do is buy the book), which is a good structure that makes the segregation clear.

    http://stackoverflow.com/questions/725348/poco-vs-dto
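    A minimal sketch of the distinction described above (the Loan class and its fields are invented for illustration): the domain object (POCO) carries both state and behavior, while the DTO is flat, behavior-free state built purely for transfer.

```python
class Loan:
    """A POCO-style domain object: plain state plus behavior,
    with no framework base class dictating its design."""

    def __init__(self, loan_id, principal, rate):
        self.loan_id = loan_id
        self.principal = principal
        self.rate = rate

    def annual_interest(self):
        # Behavior lives on the domain object.
        return self.principal * self.rate


def to_dto(loan):
    """Flatten the domain object into a behavior-free DTO for transfer.

    Note the DTO is flatter than the domain model and drops the behavior,
    which is exactly the mismatch the text above warns about."""
    return {"loan_id": loan.loan_id,
            "principal": loan.principal,
            "rate": loan.rate}
```

    The translation function is a toy version of the anti-corruption layer idea: keep the domain shape and the transfer shape separate, and convert explicitly at the boundary.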

    Wednesday, November 13, 2013

    Enabling Change Data Capture - SQL Server

    Before a capture instance can be created for individual tables, a member of the sysadmin fixed server role must first enable the database for change data capture. This is done by running the stored procedure sys.sp_cdc_enable_db (Transact-SQL) in the context of the correct database.

    To determine if a database already has CDC enabled:

    SELECT name,
           CASE WHEN is_cdc_enabled = 0 THEN 'Off' ELSE 'CDC On' END AS cdc_status,
           compatibility_level
    FROM sys.databases
    WHERE name LIKE '%%'

     

    Monday, October 28, 2013

    Informatica Debugging - Setting the PMError Log

    Recently I hit some transformation errors. By default these errors are swallowed by the Integration Service: it does not try to write the row to the target, and it does not add row-level details to the session log.

    Looking at the session log only reveals a hint of what the error may be:

    2013-09-03 09:10:38 : INFO : (11652 | TRANSF_1_1_1) : (IS | DOMAIN) : node01_SERVER : CMN_1053 : : Rowdata: ( RowType=-4(error) Src Rowid=854 Targ Rowid=854
      lkp_Key_(lkp_EDW_ID_Passthrough:Int:): "(NULL)"
      lkp_Start_DT (lkp_Start_DT:Date:): "11/27/2012 00:00:00.000000000"
    )

    There was no clear definition in Communities for this error, and the session log was not clear about where it occurred.
    I had my QA person do the following to the session throwing the transformation error:
    Check out and open the session
    Click Config Object
    Scroll to Error Handling
    Change Error Log Type - Flat File
    Change Error Log File Directory to $PMLookupFileDir
    Save and run the session
    Undo the checkout...as this change makes the session run row by row, which can impact performance.

    The following was produced.

    Transformation||Transformation Mapplet Name||Transformation Group||Partition Index||Transformation Row ID||Error Sequence||Error Timestamp||Error UTC Time||Error Code||Error Message||Error Type||Transformation Data||Source Mapplet Name||Source Name||Source Row ID||Source Row Type||Source Data
    FIL_Expire||N/A||Output||1||38||1||09/03/2013 16:34:19||137829999||11999||There is an error in the port [lkp_Key]: The default value for the port is set to: ERROR(Transformation error: FIL_Expire).||3||D:004sfsdfsdfs655E914DD833FF5CBAA88A|D:11/27/2012 00:00:00.000000000|D:1|D:0||N/A||SQ_Shortcut_to_SourceOfDimensionTbl123||38||0||D:0040484394655E914DasdasdasdA88A|D:12/13/2012 00:00:00.000000000|D:6384 XMJWRFS XY|N:|D:Dim1|D:Dim2|D:Dim3|D:Dim4

    This was 100% more useful than the session log. Since I use that lkp port in several objects, it was now clear which object and which port were impacted.

    Permissions Deployment Script

    /*
        Thanks to Chris K.
        Edit the table variables' values for custom roles, users (+ their logins and tiers), and user-role mappings.
        Server logins must pre-exist.
    */

    --CHANGE THIS: match the database you are deploying to
    USE DataBaser123

    --CHANGE THIS: D = Dev, Q = QA , U = UAT , P = Prod, D = DR
    DECLARE @tierCode CHAR(1) = 'D'

    set nocount on;

    -- define custom db roles
    declare @customDbRole table (customDbRoleNm sysname not null primary key,
                                 permission nvarchar(4000))
    insert  @customDbRole (customDbRoleNm, permission)
    values ('db_executor', N'grant execute to db_executor');                           


    -- define db users
    declare @dbUser table ( dbUserID int not null,      -- unique identifier for user that is consistent across tiers
                            tierCode char(1) not null,  -- [D|Q|U|P]
                            dbUserName sysname not null,
                            loginName sysname not null,
                            primary key ( dbUserId,
                                          tierCode ) )

    insert @dbUser (dbUserID, tierCode, dbUserName, loginName)
    values

      
        -- some users

        ( 1, 'D', 'domain\user11', 'domain\user11'),
        ( 1, 'Q', 'domain\user1', 'domain\user1'),
        ( 1, 'U', 'domain\user1D1', 'domain\user1D1'),
        ( 1, 'P', 'domain\user1', 'bubba'),  

         -- ssrs users
        ( 2, 'D', 'NW\ssrs_user_dev', 'NW\ssrs_user_dev'),
        ( 2, 'Q', 'NW\ssrs_user_qa',  'NW\ssrs_user_qa'),
        ( 2, 'U', 'NW\ssrs_user_uat', 'NW\ssrs_user_uat'),
        ( 2, 'P', 'NW\ssrs_user_prod','NW\ssrs_user_prod'),
      
        -- Informatica Service Accounts
        ( 3, 'D', 'informatica_read', 'informatica_read'),
        ( 3, 'Q', 'informatica_read', 'informatica_read'),
        ( 3, 'U', 'informatica_read', 'informatica_read'),
        ( 3, 'P', 'informatica_read', 'informatica_read'),
        ( 4, 'D', 'informatica_write', 'informatica_write'),
        ( 4, 'Q', 'informatica_write', 'informatica_write'),
        ( 4, 'U', 'informatica_write', 'informatica_write'),
        ( 4, 'P', 'informatica_write', 'informatica_write');


    /*
    define user-role mapping
    ROLES: 'db_owner' 'db_datareader' 'db_datawriter' 'db_executor' 'db_ddladmin'
    */
    declare @dbUserDbRole table ( dbUserID int not null,
                                  dbRoleNm sysname not null,
                                  primary key (dbUserId, dbRoleNm) )

    insert @dbUserDbRole (dbUserID, dbRoleNm)
    values (1, 'db_datareader'),
           (1, 'db_datawriter'),
           (1, 'db_executor'),
           (2, 'db_datareader'),     
           (3, 'db_datareader'),
           (3, 'db_executor'),
           (4, 'db_datareader'),
           (4, 'db_datawriter'),
           (4, 'db_executor');


    ------------------------
    -- COMMON BLOCK BELOW --
    ------------------------

        -- derive at which tier permissions are being applied (this relies on a registry setting
        -- or on the server name's 2nd character defining the tier)

    DECLARE @outvar varchar(255)

    if @outvar is not null
    begin
        select  @tierCode = substring(@outvar,1,1);
    end
    else
    begin
        select  @tierCode = case when @@servername like 'BV%'
                                 then 'D'
                                 else substring(@@servername,2,1)
                            end;
    end;
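    The tier-derivation fallback above can be restated in a few lines of Python for clarity (assuming the same server-naming convention; the registry lookup that would populate @outvar is out of scope, so it is just a parameter here):

```python
def derive_tier(reg_value, server_name):
    """Mirror of the T-SQL above: prefer the registry-derived value,
    else fall back to the server naming convention, where 'BV%' servers
    are Dev and otherwise the 2nd character names the tier."""
    if reg_value:
        return reg_value[0].upper()   # substring(@outvar, 1, 1)
    if server_name.upper().startswith("BV"):
        return "D"
    return server_name[1].upper()     # substring(@@servername, 2, 1)

derive_tier(None, "BVETL01")   # → "D"
derive_tier(None, "xQSQL01")   # → "Q"
```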

    declare @printStmt varchar(8000);
    select  @printStmt = 'Applying permissions for ' +
        case when @tierCode = 'D' then 'Development'
             when @tierCode = 'Q' then 'QA'
             when @tierCode = 'U' then 'UAT'
             when @tierCode = 'P' then 'Production'
             else 'Uncertain'
        end;
    print @printStmt;

    -- output information about existing users that are untouched:
    declare @untouchedUser table ( unTouchedUserNm sysname not null primary key)
    insert @untouchedUser ( unTouchedUserNm )
    select  princ.name
    from sys.database_principals princ
    where princ.type in (N'U',N'S')
      and name not in ('public','dbo','guest','information_schema','sys')
      and name not in (select dbUserName from @dbUser where tierCode = @tierCode);

    while exists (select 1 from @untouchedUser)
    begin
        declare @uun sysname;
        select top 1 @uun = unTouchedUserNm,
               @printStmt = 'The following user exists in the target but is not source controlled: ' +
                            unTouchedUserNm
        from @untouchedUser
        order by unTouchedUserNm;
      
        print @printStmt;
      
        delete @untouchedUser
        where unTouchedUserNm = @uun;
    end;

    -- apply permissions

    declare @sql nvarchar(4000) = null,
            @rc int = null;

        -- custom roles
    while exists (select 1 from @customDbRole)
    begin
        declare @customDbRoleNm sysname, @permission nvarchar(4000);
        select top 1 @customDbRoleNm = customDbRoleNm, @permission = permission
        from @customDbRole
        order by customDbRoleNm;

        select @sql = N'if not exists (select 1 from sys.database_principals WHERE name = N''' + @customDbRoleNm + ''' and type = ''R'') create role [' + @customDbRoleNm + ']';
        exec @rc = sp_executeSql @sql;
        if @rc <> 0 or @@error <> 0
        begin
            raiserror('Error from executeSql on: %s',16,1,@sql);      
            return;
        end;
      
        print @sql;

        exec @rc = sp_executeSql @permission;
        if @rc <> 0 or @@error <> 0
        begin
            raiserror('Error from executeSql on: %s',16,1,@permission);      
            return;
        end;
      
        print @permission;

        delete @customDbRole
        where customDbRoleNm = @customDbRoleNm;
    end

        -- users and role mapping
    while exists (select 1 from @dbUser where tierCode = @tierCode)
    begin
        declare @dbUserID int, @dbUserNm sysname, @loginNm sysname;
        select top 1 @dbUserId = dbUserId, @dbUserNm = dbUserName, @loginNm = loginName
        from @dbUser
        where tierCode = @tierCode
        order by dbUserId;

        select @sql = N'if not exists (select 1 from sys.database_principals WHERE name = N''' + @dbUserNm + ''') create user [' + @dbUserNm + '] for login [' + @loginNm + ']';
        exec @rc = sp_executeSql @sql;
        if @rc <> 0 or @@error <> 0
        begin
            raiserror('Error from executeSql on: %s',16,1,@sql);      
            return;
        end;
        print @sql;

        -- add user to roles
        while exists (select 1 from @dbUserDbRole where dbUserId = @dbUserId)
        begin
            declare @dbRoleNm sysname;
            select top 1 @dbRoleNm = dbRoleNm
            from @dbUserDbRole
            where dbUserId = @dbUserId
            order by dbRoleNm
          
            select @sql = N'exec sys.sp_addrolemember @rolename = N''' + @dbRoleNm + ''', @membername = N''' + @dbUserNm + '''';
            exec @rc = sp_executeSql @sql;
            if @rc <> 0 or @@error <> 0
            begin
                raiserror('Error from executeSql on: %s',16,1,@sql);      
                return;
            end;
            print @sql;      
              
            delete @dbUserDbRole
            where dbUserId = @dbUserId
              and dbRoleNm = @dbRoleNm
        end

        delete @dbUser
        where dbUserId = @dbUserID
          and tierCode = @tierCode;
    end

    Database Performance Testing Tip - Clear Cache and Buffers

    Prior to testing each query set, call DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS so each run starts with a cold plan cache and buffer pool. Both commands affect the whole instance, so only do this on a server you are free to disrupt.

    Wednesday, October 16, 2013

    Estimating Data Integration Projects

    Not an original idea; more of a wholesale rip-off of a post we found.



    T = ((E * D * 5) + (S * C * 6)) * A + F
    where
    T = total no. of days to allow for implementation
    E = no. of entities in the physical model
    D = data quality (0.5 = very clean, 1 = medium cleanliness, 1.5 = dirty, a lot of work)
    S = no. of source systems
    C = complexity of transformations (0.5 = 1-to-1 mapping, 1 = relatively simple, 2 = complex: lookups required, derived data, calculations)
    A = automation factor (1 = ETL tool used, 1.6 = manual coding)
    F = first-time implementation factor (0 = many implementations done, 20 = small & flexible, 50 = large development)
    These estimates have been tried and proven in project-management experience on projects implementing Informatica as the ETL tool.
    Try it, and refine it at the end of each project.
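    The formula drops straight into a small function, with one worked example (the inputs are hypothetical):

```python
def estimate_days(entities, data_quality, sources, complexity,
                  automation, first_time):
    """T = ((E * D * 5) + (S * C * 6)) * A + F, as defined above."""
    return ((entities * data_quality * 5)
            + (sources * complexity * 6)) * automation + first_time

# e.g. 20 entities of medium cleanliness, 3 complex sources,
# an ETL tool in hand, and a small first-time implementation:
estimate_days(20, 1.0, 3, 2.0, 1.0, 20)  # → 156.0 days
```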

    Wednesday, August 14, 2013

    Test a Percentage Of Rows

    There are times, when working with a large data set, that you may not have the space or time to test every row you have processed. I use a limiting table that contains a select set of key values which you inner join your test query to. This throttles the test to just that set of key values, as a percentage of the total data.

    DECLARE
    --set the percent of the rows, as a whole number, you want to run these tests against
        @PercentRows INT = 10,
        @RowsToProcess INT,
        @SQL NVARCHAR(MAX)

    DECLARE @LoanTbl TABLE (Loan_ID INT)

    SET @RowsToProcess = (SELECT COUNT(DISTINCT Loan_ID) FROM TableOne (NOLOCK)) * (@PercentRows * .01)
    SET @SQL = 'SELECT DISTINCT TOP ' + CONVERT(VARCHAR(20), @RowsToProcess) + ' Loan_ID FROM TableOne (NOLOCK) ORDER BY Loan_ID DESC';

    INSERT INTO @LoanTbl
    EXECUTE sp_executesql @SQL
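    The row-count math in the script reduces to one line; here it is as a Python sketch (the truncation mirrors assigning the result to an INT variable):

```python
def rows_to_process(total_distinct_keys, percent_rows):
    """How many key values the throttled test set will contain,
    matching @RowsToProcess above: count * (percent * .01),
    truncated toward zero like a T-SQL INT assignment."""
    return int(total_distinct_keys * percent_rows * 0.01)

rows_to_process(12345, 10)  # → 1234 keys in the limiting table
```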

    Monday, August 12, 2013

    Find A Date Range Gap In A Daily Loaded Table

    :SETVAR DatabaseName "NAME"
    :SETVAR SourceTable "dbo.TABLE"
    :SETVAR ControlColumn "etl_date"

    USE [$(DatabaseName)]

    DECLARE @Tbl1 TABLE (ID INT IDENTITY(1,1), Dater1 DATETIME)
    DECLARE @Tbl2 TABLE (ID INT IDENTITY(1,1), Dater2 DATETIME)
    DECLARE @DateFilter DATETIME = '2013-01-05 00:00:00.000'

    INSERT INTO @Tbl1
    SELECT DISTINCT eff_start_date
    FROM $(SourceTable)
    WHERE $(ControlColumn) >= @DateFilter
    ORDER BY eff_start_date

    INSERT INTO @Tbl2
    SELECT DISTINCT eff_start_date
    FROM $(SourceTable)
    WHERE $(ControlColumn) >= @DateFilter
    ORDER BY eff_start_date

    SELECT '$(SourceTable)' AS SourceTable,
           t.Dater2,
           CASE WHEN CAST(o.Dater1 AS DATE) <> CAST(DATEADD(D, -1, t.Dater2) AS DATE)
                THEN 'FAIL' ELSE 'PASS'
           END AS 'Skipped Day',
           CASE WHEN DATEPART(weekday, t.Dater2) = 2 THEN 'Monday'
                WHEN DATEPART(weekday, t.Dater2) = 3 THEN 'Tuesday'
                WHEN DATEPART(weekday, t.Dater2) = 4 THEN 'Wednesday'
                WHEN DATEPART(weekday, t.Dater2) = 5 THEN 'Thursday'
                WHEN DATEPART(weekday, t.Dater2) = 6 THEN 'Friday'
                WHEN DATEPART(weekday, t.Dater2) = 7 THEN 'Saturday'
                WHEN DATEPART(weekday, t.Dater2) = 1 THEN 'Sunday'
                ELSE 'Nomads!'
           END AS DayOfWeeker
    FROM @Tbl1 o
    JOIN @Tbl2 t ON o.ID = (t.ID - 1)
    WHERE o.ID <> 1
    ORDER BY Dater1 DESC
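    If you want to sanity-check the SQL's logic against a plain list of load dates, the same gap check is a few lines of Python:

```python
from datetime import date, timedelta

def find_gaps(load_dates):
    """Same check as the SQL above: flag any consecutive pair of load
    dates that is not exactly one day apart."""
    dates = sorted(set(load_dates))
    return [(prev, cur) for prev, cur in zip(dates, dates[1:])
            if cur - prev != timedelta(days=1)]

find_gaps([date(2013, 1, 5), date(2013, 1, 6), date(2013, 1, 8)])
# → [(date(2013, 1, 6), date(2013, 1, 8))]  -- Jan 7 was skipped
```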