Friday, March 22, 2013

Regex in C# to get UTC date...

I recently needed to find a UTC date in a string of text, and I thought it might be handy to pull the various values out of each date that was found by reading the Groups collection of the returned Match. Groups are identified in your regular expression string by surrounding sections with parentheses. The first Group of every Match is the entire string value that the Match found. Every other Group in the Match is numbered by its opening parenthesis, going from left to right. So, if you have a regular expression that looks like this:
"((\d\d\d\d)-(\d\d)-(\d\d))T((\d\d):(\d\d):(\d\d))Z"
Imagine that someone used the above regular expression on the following string.
"This is a UTC date : 1999-12-31T23:59:59Z. Get ready to party!"
The very first group (after the matched date/time string) will be the entire date, because the first set of parentheses completely wraps the date portion of the string.
"1999-12-31"
The next group would be the year portion of the date, since the next set of parentheses completely wraps the year.
"1999"
That pattern is repeated for the rest of the regular expression string. If no parentheses (groupings) are specified, then there will only be the one group and it will contain the string that the regular expression matched. Here is an example of how to do this in code:
using System;
using System.Text.RegularExpressions;

static void Main(string[] args)
{
    string input = "this\tis\ta test 2013-03-21T12:34:56Z\tand\tanother date\t2013-03-21T23:45:01Z";
    string regexString = @"((\d\d\d\d)-(\d\d)-(\d\d))T((\d\d):(\d\d):(\d\d))Z";
    TestRegex(input, regexString);
}

private static void TestRegex(string input, string regexString)
{
    int matchCount = 0;
    foreach (Match match in Regex.Matches(input, regexString))
    {                
        int groupCount = 0;
        foreach (Group group in match.Groups)
        {
            Console.WriteLine("Match {0}, Group {1} : {2}", 
                                matchCount, 
                                groupCount++, 
                                group.Value);    
        }
        matchCount++;
    }
}
Here is the output:
Match 0, Group 0 : 2013-03-21T12:34:56Z
Match 0, Group 1 : 2013-03-21
Match 0, Group 2 : 2013
Match 0, Group 3 : 03
Match 0, Group 4 : 21
Match 0, Group 5 : 12:34:56
Match 0, Group 6 : 12
Match 0, Group 7 : 34
Match 0, Group 8 : 56
Match 1, Group 0 : 2013-03-21T23:45:01Z
Match 1, Group 1 : 2013-03-21
Match 1, Group 2 : 2013
Match 1, Group 3 : 03
Match 1, Group 4 : 21
Match 1, Group 5 : 23:45:01
Match 1, Group 6 : 23
Match 1, Group 7 : 45
Match 1, Group 8 : 01
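Indexing groups by position works, but it is easy to lose track once a pattern grows. .NET regular expressions also support named groups via the (?<name>...) syntax, which makes the same extraction self-documenting. Here is a small sketch of the same date pattern using names (the class name and input string are just for the demo):

```csharp
using System;
using System.Text.RegularExpressions;

class NamedGroupDemo
{
    // same UTC date pattern as above, but with named groups instead of positional ones
    public const string Pattern =
        @"(?<date>(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2}))T(?<time>(?<hour>\d{2}):(?<minute>\d{2}):(?<second>\d{2}))Z";

    static void Main()
    {
        Match match = Regex.Match("This is a UTC date : 1999-12-31T23:59:59Z.", Pattern);

        // groups are looked up by name rather than by left-to-right index
        Console.WriteLine(match.Groups["date"].Value);   // 1999-12-31
        Console.WriteLine(match.Groups["year"].Value);   // 1999
        Console.WriteLine(match.Groups["second"].Value); // 59
    }
}
```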

Thursday, March 21, 2013

The return of Super Sed and Wonder Awk...

I needed to compare the tab-separated data of a file (file A) to expected data (file B).  However, file A contained the processing date in each line of output.

To handle the issue of non-matching dates, the processing date in file B was updated to be the literal string "PROCESSING_DATE".  That just left the date in file A to contend with.

Here is where sed and awk came to the rescue.  I used head -n1 to get the first line of file A, and awk to pull out the processing date (which appeared in the 11th column).  The date was stored in a variable named target_date. Next, I used sed to replace all instances of target_date in file A with "PROCESSING_DATE", after which I was able to diff the two files to see if the output was as expected.

Here is how it looked in the shell script:

# get the target date (11th column of the first line)
target_date=`head -n1 fileA.txt | awk '{print $11}'`

# build the sed argument using the target date
sed_args="s/$target_date/PROCESSING_DATE/g"

# do an in-place replacement on the target file
sed -i "$sed_args" fileA.txt

# check for differences
diffs=`diff fileA.txt fileB.txt`

if [ -n "$diffs" ]; then
    echo "There were differences between the files."
else
    echo "No differences were found."
fi
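As a sanity check, here is the same trick boiled down to a few lines against a throwaway file (the file name, column number, and sample data here are made up for the demo, and GNU sed is assumed for -i):

```shell
# build a tiny throwaway "file A" with a date in the third column
printf 'alpha\tbeta\t2013-03-21\ngamma\tdelta\t2013-03-21\n' > /tmp/fileA_demo.txt

# grab the date from the first line, then normalize every occurrence of it
demo_date=$(head -n1 /tmp/fileA_demo.txt | awk '{print $3}')
sed -i "s/$demo_date/PROCESSING_DATE/g" /tmp/fileA_demo.txt

cat /tmp/fileA_demo.txt
```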

Wednesday, March 20, 2013

Blog title image disappeared...

I have no idea what happened, but my blog title image disappeared.  I'm not sure how long it was missing, but perhaps a couple of days. I ended up uploading the image to a website, and then using the URL to reference where the image was located.  

I found some recent reports from others whose images disappeared, but nothing that seemed identical to my situation. The other mentions of missing images all pointed to deleted Picasa web albums as the reason the images were gone. I didn't delete anything recently, so I don't think that was my issue. However, I had assumed that when the layout control gives you the option to select an image from your computer, it copies the image to whatever disk space is used to host your blog entries.  Perhaps that was a bad assumption.

Has anyone else experienced the problem of blogger title images disappearing?

Wednesday, March 13, 2013

Using Amazon's AWS S3 via the AWS .Net SDK

Amazon's AWS S3 (Simple Storage Service) is incredibly easy to use via the AWS .Net SDK, but depending on your usage of S3 you might have to pay. S3 has a free usage tier option, but the amount of space allowed for use is pretty small by today's standards (5GB). The upside is that even if you end up going outside of the parameters for the free usage tier it is still cheap to use.

Here is some information from Amazon regarding the free usage tier limits for S3:

  • 5 GB of Amazon S3 standard storage, 20,000 Get Requests, and 2,000 Put Requests
  • These free tiers are only available to existing AWS customers who have signed-up for Free Tier after October 20, 2010 and new AWS customers, and are available for 12 months following your AWS sign-up date. When your free usage expires or if your application use exceeds the free usage tiers, you simply pay standard, pay-as-you-go service rates (see each service page for full pricing details). Restrictions apply; see offer terms for more details.


Sign Up To Use AWS

You need to create an account in order to use the Amazon Web Services. Make sure you read the pricing for any service you use so you don't end up with surprise charges. In any case, go to http://aws.amazon.com/ to sign up for an account if you haven't done so already.

Install or Reference the AWS .Net SDK

To start using the AWS .Net SDK to access S3 you will want to either download the SDK from Amazon or use NuGet via Visual Studio. Start Visual Studio (this example is using Visual Studio 2010), and do the following to use NuGet to fetch the AWS SDK:


  • Select the menu item "Tools | Library Package Manager | Manage NuGet Packages For Solution..."
  • Type "AWS" in the "Search Online" search text box
  • Select "AWS SDK for .Net" and click the "Install" button
  • Click "OK" on the "Select Projects" dialog

Create a Project and Use the AWS S3 API

Create a project in Visual Studio, and add the following code:


string key = "theawskeythatyougetwhenyousignuptousetheapis";
string secretKey = "thesecretkeyyougetwhenyousignuptousetheapis";

// create an instance of the S3 TransferUtility using the API key, and the secret key
var tu = new TransferUtility(key, secretKey);

// try listing any buckets you might have
var response = tu.S3Client.ListBuckets();

foreach(var bucket in response.Buckets)
{
   Console.WriteLine("{0} - {1}", bucket.BucketName, bucket.CreationDate);

   // list any objects that might be in this bucket
   var objResponse = tu.S3Client.ListObjects(
      new ListObjectsRequest 
      {
         BucketName = bucket.BucketName
      }
   );

   foreach (var s3obj in objResponse.S3Objects)
   {
      Console.WriteLine("\t{0} - {1} - {2} - {3}", s3obj.ETag, s3obj.Key, s3obj.Size, s3obj.StorageClass);
   }
}

// create a new bucket
string bucketName = Guid.NewGuid().ToString();
var bucketResponse = tu.S3Client.PutBucket(new PutBucketRequest
   {
      BucketName = bucketName
   }
);

// add something to the new bucket
tu.S3Client.PutObject(new PutObjectRequest
   {
      BucketName = bucketName,
      AutoCloseStream = true,
      Key = "codecog.png",
      FilePath = "C:\\Temp\\codecog.png"
   }
);

// now list what is in the new bucket (which should only have the one item)
var bucketObjResponse = tu.S3Client.ListObjects(
   new ListObjectsRequest
   {
      BucketName = bucketName
   }
);

foreach (var s3obj in bucketObjResponse.S3Objects)
{
   Console.WriteLine("{0} - {1} - {2} - {3}", s3obj.ETag, s3obj.Key, s3obj.Size, s3obj.StorageClass);
}
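One thing to watch out for: a single ListObjects call returns at most 1,000 keys. Here is a hedged sketch of paging through a larger bucket, continuing from the `tu` and `bucketName` variables above, and assuming the Marker/IsTruncated members of this SDK version mirror the underlying S3 REST API:

```csharp
// sketch: page through a bucket 1,000 keys at a time.
// Without a delimiter, the last key of each page serves as the
// Marker (starting point) for the next request.
var pageRequest = new ListObjectsRequest { BucketName = bucketName };
ListObjectsResponse pageResponse;
do
{
    pageResponse = tu.S3Client.ListObjects(pageRequest);

    foreach (var s3obj in pageResponse.S3Objects)
    {
        Console.WriteLine(s3obj.Key);
    }

    if (pageResponse.S3Objects.Count > 0)
    {
        pageRequest.Marker = pageResponse.S3Objects[pageResponse.S3Objects.Count - 1].Key;
    }
} while (pageResponse.IsTruncated);
```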

Thursday, March 7, 2013

Micro ORM Review - FluentData

Who doesn't love tools that make your life easier? Make a database connection and populate an object in just a few lines of code and one config setting? That's what FluentData can offer. Sign me up! 

Some Features:
  • Supports a wide variety of RDBMSs - MS SQL Server, MS SQL Azure, Oracle, MySQL, SQLite, etc.
  • Auto map, or use custom mappers, for your POCOs (or dynamic type).
  • Use SQL, or SQL builders, to insert, update, or delete data.
  • Supports stored procedures.
  • Uses indexed or named parameters.
  • Supports paging.
  • Available as an assembly (download the DLL or use NuGet) or as a single source code file.
  • Supports transactions, multiple resultsets, custom return collections, etc.

Pros:
  • Setting up connection strings in a config file, and then passing the key name to a DbContext to establish a connection, is such an easy way to do things.  It makes it very easy to point generic code at various databases, which I'm sure is the intent. By comparison, declaring a connection object, setting its connection string, and then calling its "Open" method seems undignified. :D It's really not that big of a deal, but FluentData feels much more straightforward.
  • It's very easy to start using FluentData to select, add, update, or delete data from your database.
  • It is easy to use stored procedures.
  • Populating objects from selects, or creating objects and using them to insert new data into your database is almost seamless.
  • Populating more complex objects from selects is fairly easy using custom mapper methods.
  • The exceptions that are thrown by FluentData are actually helpful. The contributors to/creators of FluentData have been very thoughtful in how they return error information.


Cons:
  • I had some slight difficulty setting a parameter for a SQL select when the parameter was used as part of a "like" against a varchar column.  The string value in the SQL looked like this: '@DbName%'.  I worked around the issue by changing the SQL to use '@DbName' by itself, and then setting the parameter value so that it included the %.
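In other words, the wildcard moves out of the SQL text and into the parameter value. A rough sketch of that workaround, continuing from a `dbcontext` like the ones in the examples below (the table and `dbName` variable here are purely illustrative, and FluentData's Parameter method is assumed to behave as its documentation describes):

```csharp
// hypothetical query; the % wildcard lives in the parameter value,
// not in the SQL string itself
string dbName = "orm";
List<dynamic> rows = dbcontext
    .Sql("select name from databases where name like @DbName")
    .Parameter("DbName", dbName + "%")
    .QueryMany<dynamic>();
```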

I originally thought that I couldn't automap when the resultsets return columns that don't map to properties of the objects (or are missing columns for properties in the target object) without using a custom mapping method. However, there is a way - you can call a method on the DB context to say that automapping failures should be ignored:

Important configurations
  • IgnoreIfAutoMapFails - Calling this prevents automapper from throwing an exception if a column cannot be mapped to a corresponding property due to a name mismatch.
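Based on that bullet, enabling it would look something like this (a sketch only; I am assuming it is a fluent configuration method on the context, taking a bool, like the other DbContext options):

```csharp
// sketch: tell the automapper to skip columns/properties it cannot match
IDbContext dbcontext = new DbContext()
    .ConnectionStringName("mysql-inventory", new MySqlProvider())
    .IgnoreIfAutoMapFails(true);
```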
Example Usage:

First, I created a MySQL database to use as a test. I created a database called ormtest, and then created a couple of tables for holding book information:

create table if not exists `authors` (
  `authorid` int not null auto_increment,
  `firstname` varchar(100) not null,
  `middlename` varchar(100),
  `lastname` varchar(100) not null,
  primary key (`authorid` asc)
);

create table if not exists `books` (
 `bookid` int not null auto_increment,
 `title` varchar(200) not null,
 `authorid` int,
 `isbn` varchar(30),
 primary key (`bookid` asc)
);

Next, I created a Visual Studio console app, added an application configuration file, and added a connection string for my database (the server and credential values here are placeholders):

<configuration>
  <connectionStrings>
    <add name="mysql-inventory"
         connectionString="Server=localhost; Database=ormtest; Uid=myuser; Pwd=mypassword;"
         providerName="MySql.Data.MySqlClient" />
  </connectionStrings>
</configuration>
Then I created my entity types:

public class Author
{
 public int AuthorID { get; set; }
 public string FirstName { get; set; }
 public string MiddleName { get; set; }
 public string LastName { get; set; }

 public override string ToString()
 {
  if (string.IsNullOrEmpty(MiddleName))
  {
   return string.Format("{0} - {1} {2}", AuthorID, FirstName, LastName);
  }
  return string.Format("{0} - {1} {2} {3}", AuthorID, FirstName, MiddleName, LastName);
 }
}
public class Book
{
 public int BookID { get; set; }
 public string Title { get; set; }
 public string ISBN { get; set; }
 public Author Author { get; set; }

 public override string ToString()
 {
  if (Author != null)
  {
   return string.Format("{0} - {1} \n\t({2} - {3})", BookID, Title, ISBN, Author.ToString());
  }
  return string.Format("{0} - {1} \n\t({2})", BookID, Title, ISBN);
 }
}
I was then able to populate a list of books by selecting rows from the books table:
public static void PrintBooks()
{
 IDbContext dbcontext = new DbContext().ConnectionStringName("mysql-inventory", new MySqlProvider());
 const string sql = @"select b.bookid, b.title, b.isbn
         from books as b;";
   
 List<Book> books = dbcontext.Sql(sql).QueryMany<Book>();

 Console.WriteLine("Books");
 Console.WriteLine("------------------");
 foreach (Book book in books)
 {
  Console.WriteLine(book.ToString());
 }
}
Unfortunately, I wasn't able to select columns from the table that didn't have matching properties in the entity type (short of the IgnoreIfAutoMapFails configuration mentioned above). You'll need to create a custom mapping method in order to select extra columns that don't map to any properties in the entity type. You can also use custom mapping methods to populate entity types that contain properties of other entity types.

Here is an example:
public static void PrintBooksWithAuthors()
{
 IDbContext dbcontext = new DbContext().ConnectionStringName("mysql-inventory", new MySqlProvider());

 const string sql = @"select b.bookid, b.title, b.isbn, a.authorid, a.firstname, a.middlename, a.lastname 
         from authors as a 
        inner join books as b 
        on b.authorid = a.authorid 
        order by b.title asc, a.lastname asc;";

 var books = new List<Book>();
 dbcontext.Sql(sql).QueryComplexMany<Book>(books, MapComplexBook);

 Console.WriteLine("Books with Authors");
 Console.WriteLine("------------------");
 foreach (Book book in books)
 {
  Console.WriteLine(book.ToString());
 }
}

private static void MapComplexBook(IList<Book> books, IDataReader reader)
{
 var book = new Book
 {
  BookID = reader.GetInt32("BookID"),
  Title = reader.GetString("Title"),
  ISBN = reader.GetString("ISBN"),
  Author = new Author
  {
   AuthorID = reader.GetInt32("AuthorID"),
   FirstName = reader.GetString("FirstName"),
   MiddleName = reader.GetString("MiddleName"),
   LastName = reader.GetString("LastName")
  }
 };
 books.Add(book);
}


And here is an example of an insert, update, and delete:
public static void InsertBook(string title, string ISBN)
{
 IDbContext dbcontext = new DbContext().ConnectionStringName("mysql-inventory", new MySqlProvider());

 Book book = new Book
 {
  Title = title,
  ISBN = ISBN
 };

 book.BookID = dbcontext.Insert("books")
         .Column("Title", book.Title)
         .Column("ISBN", book.ISBN)
         .ExecuteReturnLastId<int>();

 Console.WriteLine("Book ID : {0}", book.BookID);
 
}

public static void UpdateBook(Book book)
{
 IDbContext dbcontext = new DbContext().ConnectionStringName("mysql-inventory", new MySqlProvider());
 book.Title = string.Format("new - {0}", book.Title);

 int rowsAffected = dbcontext.Update("books")
        .Column("Title", book.Title)
        .Where("BookId", book.BookID)
        .Execute();

 Console.WriteLine("{0} rows updated.", rowsAffected);
}

public static void DeleteBook(Book book)
{
 IDbContext dbcontext = new DbContext().ConnectionStringName("mysql-inventory", new MySqlProvider());

 int rowsAffected = dbcontext.Delete("books")
        .Where("BookId", book.BookID)
        .Execute();

 Console.WriteLine("{0} rows deleted.", rowsAffected);
}


Summary:
FluentData has been fairly easy to use and there appears to be a way to accomplish whatever I want to do. If FluentData's documentation had more examples of how to populate entity types (POCOs), then it would have saved me a little bit of time. As it is, the documentation listed multiple ways to accomplish tasks, so it never took long to find a method that would work.