Simple DataAccess class for C# and Oracle

There is a lot of information out there on ORMs, the Microsoft Enterprise Library (Data Access Application Block), generic repositories, and so on, as well as plenty of patterns (Active Record, DAO, Table Data Gateway) for structuring your code in large-scale applications.

However, for many small-scale applications (especially desktop applications) you just want to use plain old ADO.NET/ODP.NET. With managed ODP.NET and publishing from Visual Studio, it's a breeze to deploy these apps on any client machine (of course, they need the .NET Framework installed).

Here is a very simple DataAccess class that uses Oracle.ManagedDataAccess and that you can reuse across all your applications. It has a connection helper plus two query methods that take a dictionary of string parameters (and we can add more methods along the same lines). GetScalar returns a single value as a string, and DoSelect returns a DataSet which you can then read based on the needs of your application. If you want parameters of other types, you could make it a Dictionary<string, object>. There would be a performance penalty for the type conversion, but you could live with that in smaller applications. Again, this is a very simplified version; you could extend it using generics so it can take parameters of any type (albeit all of the same type!).

using System;
using System.Collections.Generic;
using System.Configuration;
using System.Data;
using Oracle.ManagedDataAccess.Client;

public class DataAccess
{
    // Opens a connection using the "myDb" connection string from the config file.
    public static OracleConnection Connect()
    {
        string connectionString = ConfigurationManager.ConnectionStrings["myDb"].ConnectionString;
        OracleConnection connection = new OracleConnection(connectionString);
        connection.Open();
        return connection;
    }

    // Runs a query that returns a single value and hands it back as a string,
    // or "Empty" when the query returns nothing.
    public string GetScalar(string selectSql, Dictionary<string, string> paramList)
    {
        using (var conn = DataAccess.Connect())
        using (var cmd = new OracleCommand(selectSql, conn))
        {
            cmd.BindByName = true;
            if (paramList != null)
            {
                foreach (var param in paramList)
                {
                    cmd.Parameters.Add(param.Key, param.Value ?? String.Empty);
                }
            }
            var colValue = cmd.ExecuteScalar();
            return colValue != null ? colValue.ToString() : "Empty";
        }
    }

    // Runs a select query and returns the results as a DataSet containing a single
    // table named tableName (every column read back as a string), or null when no rows match.
    public DataSet DoSelect(string selectSql, Dictionary<string, string> parameters, string tableName)
    {
        DataSet dSet = new DataSet();
        DataTable dTable = new DataTable(tableName);

        using (var conn = DataAccess.Connect())
        using (var cmd = conn.CreateCommand())
        {
            cmd.BindByName = true;
            cmd.CommandText = selectSql;

            if (parameters != null)
            {
                foreach (var param in parameters)
                {
                    cmd.Parameters.Add(param.Key, param.Value ?? String.Empty);
                }
            }

            using (OracleDataReader rdr = cmd.ExecuteReader())
            {
                if (!rdr.HasRows)
                {
                    return null;
                }
                for (var i = 0; i < rdr.FieldCount; i++)
                {
                    dTable.Columns.Add(new DataColumn(rdr.GetName(i), typeof(string)));
                }
                while (rdr.Read())
                {
                    var dr = dTable.NewRow();
                    for (var i = 0; i < rdr.FieldCount; i++)
                    {
                        dr[i] = rdr[i].ToString();
                    }
                    dTable.Rows.Add(dr);
                }
            }
            dSet.Tables.Add(dTable);
            return dSet;
        }
    }
}
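To give an idea of how this would be consumed, here is a minimal usage sketch; the table, column and bind-variable names below are made up for illustration:

// Hypothetical usage: fetch employees for a given department.
var dal = new DataAccess();
var parameters = new Dictionary<string, string> { { "deptId", "10" } };

DataSet ds = dal.DoSelect(
    "SELECT emp_name, hire_date FROM employees WHERE dept_id = :deptId",
    parameters,
    "Employees");

if (ds != null)
{
    foreach (DataRow row in ds.Tables["Employees"].Rows)
    {
        Console.WriteLine(row["EMP_NAME"]);   // column names come back as Oracle returns them
    }
}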

ASP.NET MVC vs iOS MVC: a subtle difference

We all know the basic definition of MVC, whose main aim is to separate your application into three main components, giving you separation of concerns, ease of testing, and maintainability. In short, M stands for Model (the domain objects and the logic around the application's data), V (View) is the UI shown to the user, typically built from model data, and C (Controller) handles user interaction, working with both the model and the view. The picture below is from a typical ASP.NET MVC app.

[Diagram: ASP.NET MVC interactions between Model, View and Controller]

If you closely follow the arrows, the controller interacts with both the view and the model, the view interacts with the model, and the model does not interact with anybody. This means you can have any backend database (Oracle, SQL Server, MySQL, NoSQL) and the view and controller do not care.
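To make that concrete, here is a minimal ASP.NET MVC controller sketch; the repository and model types are made up for illustration:

public class ProductsController : Controller
{
    private readonly IProductRepository _repository; // hypothetical model-layer dependency

    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Details(int id)
    {
        Product product = _repository.Find(id); // controller talks to the model
        if (product == null)
            return HttpNotFound();
        return View(product); // controller hands the model to the view
    }
}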

Here is a similar diagram for iOS MVC. M, V and C still mean the same thing, since MVC is an architectural pattern. Once again, if you closely follow the arrows (ignore the additional ones, since they are not relevant to our current discussion), you see that there is no arrow between the view and the model in an iOS app. The controller is the central piece, talking to both the model and the view through target/action.

[Diagram: iOS MVC interactions between Model, View and Controller]

Also, if the model's state changes, it notifies the controller, and the controller then updates the view. Controller objects can also perform setup and coordination tasks for an application and manage the life cycles of other objects.

PowerShell scripts for searching and replacing text

I work with a lot of open source projects. It is quite common, especially during the initial learning phase, to want to know which file is being used for what, so that you can customize it to your requirements. It could be configuration, logging, emails, etc. If you have not used Windows PowerShell, I definitely recommend it. PowerShell, like the name says, is very powerful.

MSDN says "Windows PowerShell commands, called cmdlets, let you manage the computers from the command line. Windows PowerShell providers let you access data stores, such as the registry and certificate store, as easily as you access the file system. In addition, Windows PowerShell has a rich expression parser and a fully developed scripting language."

There are a lot of resources available online to learn more about the scripting capabilities of PowerShell.

The most common script that I use searches for a piece of text across a set of files.

If you want to search the subdirectories as well, use the -recurse option. The following searches for the string "searchstring" under your directory, groups the matches by path, and outputs the file names (including the full path):


PS> cd "DirectoryName"
PS> Get-ChildItem -recurse | Select-String -pattern "searchstring" | group path | select name

There are times when you want to replace one piece of text with another. For example, you may have a production schema and a test schema with different names, and SQL scripts that need to target both. A quick way to switch between test and prod using PowerShell is:


get-childitem *.sql |
    foreach-object { $fname = $_; (get-content $_) |
        foreach-object { $_ -replace 'PROD','TEST' } |
        set-content $fname }

The above code replaces the string PROD found in your .sql files with TEST. Note that the -replace operator is case-insensitive by default; use -creplace if you need a case-sensitive match. The script loops through each file with the .sql extension, gets the file content, and replaces the text. Set-Content at the end saves the file to finish the replacement.

Here is another common requirement: deleting files older than 'n' days. These could be log files, backup files, listener logs, alert logs, etc.

$n = 30   # hypothetical threshold in days; change as needed
$a = Get-ChildItem *.log
foreach ($x in $a)
{
    $y = ((Get-Date) - $x.CreationTime).Days
    if ($y -gt $n -and $x.PSIsContainer -ne $True)   # PSIsContainer checks whether the object is a folder
        { $x.Delete() }
}

I will keep adding to this list as I find more relevant scripts…

Troubleshooting a slow legacy ASP application running on IIS 7.5 and a SQL Server database

The other day we were getting complaints from users about one of our legacy ASP applications running with SQL Server as the backend database. Users saw slowness across all the pages, and upon further investigation I was able to replicate it. However, the slow performance was random, and sometimes the app behaved as usual.

On inspecting the server, I could see that the IIS process (w3wp.exe in Task Manager) was consuming a lot of resources (almost 50% of CPU when users experienced the slowness). This led me to believe that something in the legacy code was causing the trouble. But then again, the same code had been running fine for quite some time, and I was not aware of any new code changes.

Perhaps, over a period of time, the data in the SQL Server database had changed in a way that caused an inefficient execution plan? The load on the database server didn't particularly concern me, though there were very small spikes during the times when IIS CPU was up.

So my first step was to use Failed Request Tracing in IIS and see whether it shows something right out of the box.

If you have not used FRT, this blog post explains how to use failed request tracing and how to set up tracing for requests that take longer than usual. In my case some requests were taking more than 20 seconds when they should have returned in milliseconds.

Unfortunately for me, the log file didn't immediately give me the results I had hoped for. I was getting this:

[Screenshot: FRT log truncated at the maximum log file size]

The default log file size is 500 KB, and that wasn't enough to debug my issue. So I decided to increase the log file size, which you can do as follows. I set the size to 1 MB:

cd /d "%windir%\system32\inetsrv"
appcmd set config /section:sites -siteDefaults.traceFailedRequestsLogging.maxLogFileSizeKB:1024

After the size increase, I still got the same message. The timing information in the trace file didn't reveal anything either, as the operations were all in milliseconds. So I kept increasing the size up to 10 MB and was still hitting the log file maximum.

This indirectly helped me realize that the ASP code was executing something so big that even a 10 MB trace file wasn't able to capture it. (I could have tried the LogParser utility to see whether a particular page was causing the issue, but the users' issues were not restricted to a single page; they saw it across the board.) I had already looked at the SQL Server Activity Monitor and queried sys.dm_exec_requests and sys.sysprocesses, and saw that the last statement executed was a call to a particular stored procedure (at the time a slowness report came in from a user). I had already tested that procedure, and it returned data within milliseconds.

I then decided to use a SQL Server profiler to see the statements being executed, hoping it might give me a clue. Since I was using SQL Server Express Edition, it didn't come with SQL Server Profiler. On further reading I found a simple open source tool, Express Profiler.

Bingo: I saw one statement that was returning a much bigger chunk of data than normal (timing wasn't the issue; the statement still returned in under a second). The statement was a call to the SQL Server stored procedure with null parameters. The stored procedure should return only one row of data, but for a null parameter it was returning over 20K rows. So the legacy ASP code was processing those 20K-plus rows line by line and doing something with each one, which caused a huge CPU spike whenever that piece of ASP code was running. Though I understood what was happening, I was a bit confused, because the ASP code should have called the procedure with valid parameters. I double-checked the code to make sure. My next step was to change the code so that the stored procedure was called only if the parameters were valid (as in not null; I didn't know why they would ever be null).

As soon as I did that, CPU utilization went back to normal, and I realized the problem was temporarily fixed. I say temporarily because I still didn't know why the stored procedure was being called with null parameters. I ran several tests to figure out whether, anywhere while jumping across pages, the parameter became null, but I could not find it.

I then decided to look at the default IIS logs for all the incoming requests. (There is probably a lesson in there: I could have looked at the logs in the first place, but then I would not have learned the other techniques.) I saw a bunch of requests at regular intervals from the Google and Bing bots that directly targeted the pages with invalid query strings, and I realized there was something wrong with the code when those query strings were invalid or in the wrong format. I modified the code to stop processing the request as soon as an invalid query string was detected.

My next step was to see how to prevent the bots from hammering the server, and that's where IIS request filtering came into play. You can block user agents using this technique. The following configuration (added to your project's web.config) blocks Googlebot and Bingbot when the request is for any .asp file:

<security>
  <requestFiltering>
    <filteringRules>
      <filteringRule name="GoogleBot" scanUrl="false" scanQueryString="false">
        <scanHeaders>
          <clear />
          <add requestHeader="User-Agent" />
        </scanHeaders>
        <appliesTo>
          <clear />
          <add fileExtension=".asp" />
        </appliesTo>
        <denyStrings>
          <clear />
          <add string="Googlebot" />
          <add string="bingbot" />
        </denyStrings>
      </filteringRule>
    </filteringRules>
  </requestFiltering>
</security>

ODAC 12c and Visual Studio Developer Tools

Recently I had to work on a simple Windows Forms application that had to accomplish basic CRUD functions. I thought I might be able to wire something up real quick using the Visual Studio developer tools for Oracle and ODAC. I also wanted to use the new managed ODP.NET provider instead of the unmanaged provider, as it would ease the deployment process. This was a very simple project, and I didn't want the overhead of installing the Oracle client on the client machines; that would have been overkill. So I downloaded the latest ODAC from here. I was using both Visual Studio 2010 and 2012, so while doing the installation you need to choose the correct Visual Studio version.

If you already have an Oracle client installed (I had Oracle 11.2), the ODAC installation automatically copies tnsnames.ora and sqlnet.ora to the new locations under the 12.1 folder. After installation I also restarted my machine, though the installation doc didn't mandate it (I was due for a Windows Update reboot anyway).

To test my Oracle connections I used the Add Connection dialog from Server Explorer in Visual Studio and chose the ODP.NET Managed Provider under Data Source.

Upon doing so, I realized that it was not populating the tnsnames entries in the Data Source Name drop-down box when I chose the managed provider. If I changed the option to the unmanaged provider, it populated the entries. This was a bit baffling, because I was able to tnsping and make a SQL*Plus connection to the same Oracle database (and those were using the tnsnames file).

So then I tried the EZConnect option (under Connection Type) and gave the full details of my database host name, port, and so on. This time I got an OracleInternal.Network.AddressResolution exception, which again seemed weird. I searched online to see whether anyone else had hit this message, and found that it usually comes with an inner exception indicating why the address resolution failed. However, I didn't get one, so I was not sure of the cause.

I decided to do a couple of things:

1) I created the TNS_ADMIN environment variable pointing to the location of my 12.1 tnsnames file (ORACLE_HOME/network/admin/).

2) I created a simple C# console app (to find out the inner exception) and added a connectionString entry to the app.config pointing to my database:


<add name="MYConnectionString" connectionString="DATA SOURCE=xxx;PASSWORD=xxx;USER ID=xxx"
 providerName="Oracle.ManagedDataAccess.Client" />

and wrote a simple test app trying to get a count from a table in the database. Upon debugging, I found that the app threw the same network resolution error, and when I looked at the inner exception, it said that address resolution failed while reading LDAP data. I don't know why it threw that error, because in my sqlnet.ora TNSNAMES is listed before LDAP.
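The test harness looked roughly like this (a sketch, not the original code; the table name is made up):

using System;
using System.Configuration;
using Oracle.ManagedDataAccess.Client;

class Program
{
    static void Main()
    {
        string cs = ConfigurationManager.ConnectionStrings["MYConnectionString"].ConnectionString;
        try
        {
            using (var conn = new OracleConnection(cs))
            using (var cmd = new OracleCommand("SELECT COUNT(*) FROM some_table", conn))
            {
                conn.Open();
                Console.WriteLine("Row count: " + cmd.ExecuteScalar());
            }
        }
        catch (Exception ex)
        {
            // walk the exception chain; the inner exception carries the
            // real reason the address resolution failed
            for (Exception e = ex; e != null; e = e.InnerException)
                Console.WriteLine(e.GetType().FullName + ": " + e.Message);
        }
    }
}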

My thought was that maybe the new managed provider was also trying to look at ldap.ora, so I decided to copy the ldap.ora from my other Oracle home to the new 12.1 Oracle home. Voila: my console app worked, and when I tried to create a new connection from the Add Connection dialog, I was able to see my tnsnames entries populated.

[Screenshot: Add Connection dialog showing tnsnames entries for the ODP.NET managed provider]

Update: after it had been working a while, I decided to remove the ldap.ora file from the TNS_ADMIN location, and it still populated the tnsnames entries fine! My guess now is that adding the TNS_ADMIN environment variable was enough. But I swear that was the first thing I tried, and it didn't populate the entries then. I will update this when I find out more.

Grant object access from an application schema to another schema in Oracle

On many projects you have a single application schema which contains all the objects necessary for the application. When you have to develop an external utility (a desktop app, a web app, or even a mobile app) that connects to the application schema, it makes sense to give it access to only those objects it requires.

Let's call the application schema app_schema.

Our first step is to create the external schema and grant it the basic privileges:

SQL>create user ext_schema identified by some_password;

SQL>grant connect to ext_schema;

Once you create the user, your next step is to identify the tables the external schema needs access to. The grant statement must be executed from a schema that has the necessary privileges (the application schema itself, SYS, or another privileged schema):

SQL>grant select on app_schema.table_name to ext_schema;

This is one way of doing it, but a better practice is to create a role, assign all the object privileges to that role, and then grant the role to the ext_schema user.

SQL>create role ext_app_role;     -- run as a privileged user who has the permissions to create roles

-- if, for example, you want to give select access on all tables in app_schema
-- to ext_schema, a script like this will accomplish the task

SQL>begin
  FOR x IN (SELECT table_name FROM dba_tables WHERE owner = 'APP_SCHEMA')
  LOOP
    EXECUTE IMMEDIATE 'GRANT SELECT ON APP_SCHEMA.' || x.table_name || ' TO ext_app_role';
  END LOOP;
end;
/

If you want to give insert/update on selected tables, you issue similar grants, for example:

SQL>grant insert, update on app_schema.table_name to ext_app_role;

Next, grant the new role to the external schema.

SQL>grant ext_app_role to ext_schema;

There is still one more thing to do. To access app_schema objects from ext_schema, you would have to type the fully qualified name, such as app_schema.table_name.

So if you log in as ext_schema and execute a statement such as the one below,

SQL>select * from app_schema_table;

then you will get an ORA-00942 (table or view does not exist) error.

If you want to avoid that and just use table_name instead of schema_name.table_name, you need to create private synonyms while logged in as ext_schema (note that ext_schema needs the CREATE SYNONYM privilege for this):

SQL>connect ext_schema@yourdb
SQL>begin
  FOR t IN (SELECT DISTINCT table_name FROM role_tab_privs
            WHERE role = 'EXT_APP_ROLE' AND owner = 'APP_SCHEMA')
  LOOP
    EXECUTE IMMEDIATE 'CREATE SYNONYM ' || t.table_name || ' FOR app_schema.' || t.table_name;
  END LOOP;
end;
/
--then you would be able to access like this

SQL>select * from app_schema_table;
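From the external utility's side, connecting as the new schema is then just a matter of pointing the connection string at ext_schema. Here is a hedged C# sketch (the connection details and table name are placeholders):

using System;
using Oracle.ManagedDataAccess.Client;

class ExtSchemaDemo
{
    static void Main()
    {
        // placeholder credentials; in a real app read these from configuration
        string cs = "Data Source=yourdb;User Id=ext_schema;Password=some_password";
        using (var conn = new OracleConnection(cs))
        using (var cmd = new OracleCommand("SELECT COUNT(*) FROM app_schema_table", conn))
        {
            conn.Open();
            // the unqualified table name resolves through the private synonym created above
            Console.WriteLine(cmd.ExecuteScalar());
        }
    }
}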

Flashback database and sizing undo for flashback query

Flashback technology has been around in some form since Oracle 9i. But in newer versions (10g, 11g and 12c) this feature is significantly improved and is a very valuable tool for recovering from all sorts of errors (human, application or other logical errors).

There has always been some confusion about the flashback query options vs. Flashback Database. If you want Flashback Database (meaning the whole database can be reverted to a specific point in time), you need to enable the option using the following settings. On 10g you must shut down cleanly and mount the database to perform these operations; on 11g and above you can do it while the instance is open. It is also important that the database be in archivelog mode.


SQL>alter system set db_flashback_retention_target=4320;   -- 4320 minutes = 3 days

SQL>alter database flashback on;

This means you should be able to rewind the whole database to any point in time within the last three days (set by the db_flashback_retention_target parameter). Flashback uses its own logging mechanism, writing flashback logs to the fast recovery area. You should also have the archived logs for the last three days to make the database state consistent. This is because flashback logs contain old versions of each block before a change was made; when a rewind is performed, the database restores the version of each block immediately before the target time, then uses the redo logs to reapply the changes made after those blocks were written to the flashback logs.

However, if you want to flash back a table, or check the state of a table as of some point in time, instead of the whole database, you don't need to enable anything on your database. All you need to do is size your undo tablespace according to your flashback query requirements. Say you want flashback query to be able to go back 3 hours for your database; then you need to set undo retention to that value. This means that, provided the undo tablespace is adequately sized, the database will keep undo data for the full retention period for flashback query to use. The default value of undo_retention is 900 seconds (15 minutes), which is not sufficient for our requirement, so let's change it to 3 hours:


SQL>alter system set undo_retention=10800 scope=both;   -- 60*60*3 = 10800 seconds (3 hours)

If you use an autoextensible undo tablespace and you change undo_retention, the database will size the undo tablespace accordingly. However, if you want to use a fixed-size undo tablespace, or you want to know well in advance how much space is required, the following steps will help you determine the desired size.

You can determine the current undo size with the following query (this step is optional, but it is helpful to see how much undo is currently allocated, giving you an idea of how much you need to grow):

SELECT (SUM(df.bytes)/1024/1024) UndoMB
FROM v$datafile df, v$tablespace ts, dba_tablespaces ds
WHERE ds.contents = 'UNDO'
AND ds.status = 'ONLINE'
AND ts.name = ds.tablespace_name
AND df.ts# = ts.ts#;

Next, calculate the maximum number of undo blocks generated per second:

SELECT MAX(undoblks/((end_time-begin_time)*3600*24)) undo_blocks_per_sec
FROM v$undostat;

Then the simple calculation below gives you the size of your desired undo tablespace:

   undo tablespace size = db_block_size * max_undo_blocks_per_sec * undo_retention
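As a purely illustrative example, suppose the block size is 8 KB, the query above reports a peak of 500 undo blocks per second, and undo_retention is the 10,800 seconds we set earlier:

   8192 * 500 * 10800 = 44,236,800,000 bytes, or roughly 41 GB

Plug in your own block size and v$undostat output to get the number for your database.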

SHA-512 hashing in C#

Recently we had to write an external utility that talks to a third party. The third party required us to authenticate using their SHA-512 hashing scheme (often loosely called encryption), and we were not told which character encoding we had to support. They were using Java and its implementation of SHA.

If they had been using UTF-16 encoding, there would have been a difference in how C# and Java handle little endian vs. big endian: Java uses big endian byte order, whereas C# defaults to little endian. With UTF-8, however, the bytes come out the same in both languages. In our case it was indeed UTF-8, so our implementation was fairly straightforward.

using System;
using System.Security.Cryptography;
using System.Text;

public static string ComputeSha512Hex(string text)
{
    using (SHA512 alg = SHA512.Create())
    {
        // hash the UTF-8 bytes of the input, matching the third party's Java side
        byte[] result = alg.ComputeHash(Encoding.UTF8.GetBytes(text));
        var hex = new StringBuilder(result.Length * 2);
        foreach (byte x in result)
        {
            hex.AppendFormat("{0:x2}", x);   // two lowercase hex digits per byte
        }
        return hex.ToString();
    }
}

If you were to encounter UTF-16 between Java and C#, you would have to specify the little endian encoding in Java (UTF-16LE) and use the default UTF-16 encoding (Encoding.Unicode) in C#.
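A sketch of what the C# side of that would look like (reusing the usings from the snippet above):

// Encoding.Unicode is UTF-16 little-endian in C#; the Java side would call
// text.getBytes("UTF-16LE") to produce the same byte sequence.
public static byte[] HashUtf16(string text)
{
    using (SHA512 alg = SHA512.Create())
    {
        return alg.ComputeHash(Encoding.Unicode.GetBytes(text));
    }
}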