Flow statements in React and JSX

Attempting to use an if statement in JSX will quickly be met with lots of resistance. For instance, trying to conditionally render some content like so:

    if (shouldShow) {
        <div>some conditional text</div>
    }


This will generate errors from the JSX transpiler. As discussed in the React docs, this is because JSX transpiles into a series of JavaScript function calls and object creation, where you can’t legally use an if statement. So how do you go about hiding and showing content based on some type of condition?

There are a couple of examples in the docs on how you might achieve this. One common and well-accepted option is to use a JavaScript expression:

{ shouldShow && <div>conditional content</div> }

This method takes advantage of short-circuiting in JavaScript. If the condition on the left is falsy, then the JSX on the right (which just compiles down to a React.createElement call) never gets executed.
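The short-circuit behavior is easy to see in plain JavaScript, outside of JSX. In this sketch, `createElement` is a hypothetical stand-in for the `React.createElement` call the JSX would compile to:

```javascript
// Plain JavaScript sketch of the short-circuiting that JSX relies on.
// createElement is a hypothetical stand-in for React.createElement.
function renderConditional(shouldShow) {
  let elementCreated = false;
  const createElement = () => {
    elementCreated = true; // only runs when shouldShow is truthy
    return "<div>conditional content</div>";
  };
  // If shouldShow is falsy, the right-hand side is never evaluated.
  const result = shouldShow && createElement();
  return { result, elementCreated };
}
```

With a falsy condition, `createElement` never runs at all, which is exactly why this pattern is cheap in JSX.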

At first, this took me a while to get used to seeing in React codebases. But once I was used to it, it became second nature.

To take this example another step further, you can also use ternary expressions inside of JSX like so:

{ shouldShow ? <div>Content A</div> : <div>Content B</div> }

Ternaries get ugly quickly inside of JSX, and I wouldn’t recommend using them for anything beyond trivial examples.

However, we sometimes need if/else conditionals, so what are we to do?

We could define variables and set them conditionally outside of our return statement, where conditional statements are perfectly legal. While this approach works well, I feel that it scatters the view code across multiple places and gets hard to follow.
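As a sketch of that approach, with strings standing in for the JSX elements (which would really be React.createElement calls):

```javascript
// Sketch of deciding content with a plain if statement before the return.
// The strings stand in for JSX elements in a real component's render.
function render(shouldShow) {
  let content; // assigned conditionally, outside of the "JSX"
  if (shouldShow) {
    content = "<div>Content A</div>";
  } else {
    content = "<div>Content B</div>";
  }
  return content; // referenced once where the JSX would be returned
}
```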

The React docs also recommend using an inline IIFE (immediately-invoked function expression) to hide the conditional inside of a function like so:

    {(function() {
        if (shouldShow) {
            return <div>Content A</div>;
        } else {
            return <div>Content B</div>;
        }
    })()}

The IIFE method works well in most cases, but at the same time it feels hacky and adds some script to our markup.

I have also seen approaches of wrapping the component in another conditional component and conditionally showing the inner components like so:

    <Maybe show={shouldShow}>
        <div>conditional content</div>
    </Maybe>

The Maybe component would then check the show prop and conditionally render its children. While I like this syntax, the drawback to this method is that the internal components will always be executed, no matter the value of show. This is because JSX turns the child components into React.createElement calls, executes those calls, and passes the results as children to the Maybe component. This isn’t a big deal for just showing and hiding text content, but it could have unintended performance consequences if you were passing in other React components.
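Stripped of JSX, a hypothetical Maybe component could be as simple as a function that returns its children or null based on the prop:

```javascript
// A minimal, hypothetical Maybe component as a plain function.
// In real JSX, props.children would already be the result of
// React.createElement calls by the time Maybe runs -- which is
// exactly the drawback described above: the children get built
// regardless of the value of show.
function Maybe(props) {
  return props.show ? props.children : null;
}
```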

There is a promising project that lets you write your conditionals like the wrapped component and takes care of not executing the children if the condition is falsy. It’s called JSX-Control-Statements, and it does its magic as a Babel plugin. Check it out if none of the above solutions work for you.

Accessing the Http Request’s body from HttpActionExecutedContext

If you have ever tried to directly access the body of an HTTP request on a POST or PUT in Web API, you might have run into some roadblocks. Unlike the MVC pipeline, Web API exposes the body of the request through a stream for performance reasons. Once the stream is read during the model binding phase, its position is left at the end, and if you try to read it again at a later time, you might notice the body is just a blank string.

Today I was writing an ExceptionFilter and wanted to log the body on certain error conditions. The HttpActionExecutedContext had the Request object, which had the Content property, but when I tried reading it (using context.Request.Content.ReadAsStringAsync()), the result was the aforementioned blank string. This is because when the stream was read earlier in the pipeline, it was left positioned at the end. In order to get the body, you need to reset the position of the stream back to zero and then read it again. The following function will return the content of the body as a string that you can use in many of Web API’s filter attributes:

    private string GetBodyFromRequest(HttpActionExecutedContext context)
    {
        string data;
        using (var stream = context.Request.Content.ReadAsStreamAsync().Result)
        {
            // rewind the stream so the body can be read a second time
            if (stream.CanSeek)
            {
                stream.Position = 0;
            }
            data = context.Request.Content.ReadAsStringAsync().Result;
        }
        return data;
    }

Authenticating with SharePoint Online in an Ionic/Angular/PhoneGap app

I am working on a side project to update lists in a SharePoint Online instance. The project is a hybrid mobile app using PhoneGap and the Ionic framework. I have been playing around with Ionic for a couple of months now, and so far I am really enjoying it. For starters, it uses AngularJS, which is my favorite MV* framework of the moment. Also, Ionic provides great tooling to help you build and package up your app. If you haven’t checked it out yet, do so at http://ionicframework.com/.

The first hurdle in the app was that I am no SharePoint expert and I don’t know the first thing about their APIs or how to work with them. I needed to figure out how to actually authenticate with the SharePoint Online instance. A Google search returned so many results and so many different ways to accomplish authentication in SharePoint that it was confusing which to try. I tried a few different ways and none of them seemed to work. Then I found this great blog post. It turns out that there is a difference when authenticating with SharePoint Online versus an on-prem SharePoint 2013 instance, and most of the articles I was reading were about authenticating with an on-prem instance.

The post details how you need to send a SAML token to the Microsoft ID Service, which is the provider of authentication for nearly every MS property on the web, from Office365 to Xbox. While the post didn’t show exactly how to do this in JavaScript, it was fairly easy to make it work.

Since my Ionic app is just a web app running on a local device, I can take advantage of reusing the cookies that come from authenticating with the Microsoft ID service. The following Angular service does just that. Once you authenticate, you will have two cookies, called rtFa and FedAuth. Every request you make into the SharePoint REST APIs will then carry these cookies, and you will be authenticated.

The service first constructs a SAML token using a userId, password, and the URL of the SP Online instance. It then calls into Microsoft’s login service to obtain an authentication token. From that token, you need to extract the bearer token, and then pass that into SharePoint’s forms-based authentication. Once that is done, you will have the cookies needed to make future API requests.
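As a rough sketch of the first step, the SAML request is just a templated XML envelope with the credentials and site URL substituted in. The envelope below is heavily abbreviated and the element names are assumptions to verify against the full sample in the blog post:

```javascript
// Hypothetical sketch: build the SAML request body by substituting the
// credentials and site URL into an XML template. The envelope here is
// abbreviated; the full WS-Trust envelope is in the sample the post links to.
function buildSamlRequest(username, password, siteUrl) {
  return [
    "<s:Envelope xmlns:s='http://www.w3.org/2003/05/soap-envelope'>",
    "  <s:Body>",
    "    <t:RequestSecurityToken>",
    "      <o:UsernameToken>",
    "        <o:Username>" + username + "</o:Username>",
    "        <o:Password>" + password + "</o:Password>",
    "      </o:UsernameToken>",
    "      <a:EndpointReference>",
    "        <a:Address>" + siteUrl + "</a:Address>",
    "      </a:EndpointReference>",
    "    </t:RequestSecurityToken>",
    "  </s:Body>",
    "</s:Envelope>"
  ].join("\n");
}
```

The resulting string is what gets POSTed to Microsoft’s login service; the token extracted from the response is then handed to SharePoint’s forms-based authentication, as described above.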

Ending the debug session in Visual Studio 2013 Preview kills IISExpress, and how to fix it

It seems that by default, when you end the debugging session of an ASP.Net app in Visual Studio 2013 Preview, it will kill the IISExpress instance as well. This is fairly annoying, as my workflow usually involves leaving the browser window open, doing a new build, and then refreshing the browser. Fortunately there is an easy fix: all you need to do is go to the project properties for the web application, go to the Web tab, and uncheck “Enable Edit and Continue”.

I hope that this is just a bug and will be fixed in the final release of VS2013, as the idea of finally getting edit and continue back is cool, but not at the expense of killing IISExpress.

Error running Node JS in IISExpress and IISNode

I recently started playing around with Node JS, and have been going through TekPub’s Backbone JS tutorials (which use Node for the backend). In the tutorial, Rob uses WebMatrix on Windows 8, and I followed along; however, I ran into a problem the first time I tried to run the site using IISExpress and IISNode:

“The iisnode module is unable to start the node.exe process. Make sure the node.exe executable is available at the location specified in the system.webServer/iisnode/@nodeProcessCommandLineelement of web.config. By default node.exe is expected to be installed in %ProgramFiles%\nodejs folder on x86 systems and %ProgramFiles(x86)%\nodejs folder on x64 systems.”

I went into the web.config file, and there is a commented-out section with a bunch of different options for IISNode. One of them is nodeProcessCommandLine, which is the path to node.exe. On 64-bit systems, this is installed in Program Files, and even though the path was correct, IISNode refused to start. I did some searching and found this post on Stack Overflow, which suggested that there is a bug in IISNode where it always looks for node in the 32-bit path (Program Files (x86)). The solution was to go back to Node’s website and download and install the 32-bit version. This worked for me, but I was surprised that more people haven’t run into this issue, since the default install from Node is the x64 version. Hope this helps anyone else that runs into this problem.
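For reference, the setting in question lives in web.config and looks roughly like this; the exact attribute set varies by IISNode version, so treat this as a sketch rather than a drop-in config:

```xml
<configuration>
  <system.webServer>
    <iisnode nodeProcessCommandLine="&quot;C:\Program Files (x86)\nodejs\node.exe&quot;" />
  </system.webServer>
</configuration>
```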

MVC4 Quick Tip #4–Updating a model the HTTP way with ASP.Net Web API

Disclaimer: This code is based on MVC 4 Beta 1, and might change in future releases.

The www.asp.net/web-api website has a few good videos and decent examples; however, I didn’t notice a concrete way of doing an update using ASP.Net Web API.  So I thought I would quickly document how I implemented it for anyone else looking to do the same, and I welcome any comments or suggestions if I am going about this the wrong way.

ASP.Net Web API embraces HTTP and all that comes along with it, and that is one of the main differences between APIControllers and the regular controllers found in MVC.  In MVC, you would normally do all your CUD (Create, Update, Delete) through POST actions to the controller.  In ASP.Net Web API, the recommended way is to use the POST verb for creating objects, PUT for updating objects, and DELETE (you guessed it) for deleting objects.  So instead of having a controller decorated with HTTP verb attributes, APIControllers use a convention in the way you name your methods.  If a method name begins with Get, it handles GET requests; if it begins with Post, POST (create) requests; and so on.

Below is a quick example of doing a PUT request to an APIController.  Your update logic will undoubtedly vary, but it is the outline of the method that is important:

    public HttpResponseMessage<Person> PutPerson(Person person)
    {
        // your update logic for the person would go here

        var response = new HttpResponseMessage<Person>(person, HttpStatusCode.OK);
        response.Headers.Location = new Uri(Request.RequestUri, "/api/person/" + person.Id);
        return response;
    }

The jQuery AJAX call for this method is as follows:

    $.ajax("/api/person", {
        data: JSON.stringify(person), // serialize to match the JSON content type
        type: "PUT",
        contentType: "application/json",
        statusCode: {
            200: function (data) {
                // the update succeeded
            },
            500: function (data) {
                // something went wrong on the server
            }
        }
    });

MVC4 Quick Tip #3–Removing the XML Formatter from ASP.Net Web API

ASP.Net Web API provides a powerful new feature called content negotiation that will automatically format your responses based on what the client asks for.  For instance, if the client sends ‘application/xml’ in the Accept HTTP header, then the format of your response will be XML.  The two default formatters included in Web API are the XML formatter and the JSON formatter.  The beauty of automatic content negotiation is that you don’t need to write any code to map your models into these formats; it is taken care of for you by the formatters.  You can even create your own formatters to serve up different types as well, such as images, vCards, iCals, etc.

If all you are doing with your Web API is serving up JSON to web or mobile clients, you might not feel the need to offer any other format besides JSON.  This can also be helpful for quickly testing your APIs by hitting them with a web browser, as some browsers ask for application/xml in their default Accept header.

Disclaimer: This code is based on MVC 4 Beta 1, and might change in future releases.

Removing the XML formatter is really easy, and I found this tip from Glenn Block, who conveniently posted it to his blog while I was searching for a way to do this.  Put the following code somewhere in your Application_Start method in the global.asax.cs file:
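The exact snippet from Glenn’s post isn’t reproduced here, but the idea is to remove the XML formatter from the global formatter collection. A sketch along those lines (the property names shifted between the beta and later releases, so verify against your version):

```csharp
protected void Application_Start()
{
    // Remove the XML formatter so JSON is the only built-in format served.
    var formatters = GlobalConfiguration.Configuration.Formatters;
    formatters.Remove(formatters.XmlFormatter);
}
```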


MVC4 Quick Tip #2–Use Delegates to setup the Dependency Resolver instead of creating a Dependency Resolver class

I’m a big fan of cutting out unneeded or unnecessary code, so here is a tip that isn’t new in MVC4, but I just discovered it and thought it was really cool and worth sharing.  When using IOC in MVC, you set up a class that implements IDependencyResolver, which has methods to return services for a given type.  While these classes were simple, it always seemed like a bit of ceremony was required whenever starting up a new MVC app.

The following example shows what a DependencyResolver class looks like using my favorite IOC framework, Ninject:

    public class NinjectDependencyResolver : IDependencyResolver
    {
        private readonly IKernel _kernel;

        public NinjectDependencyResolver(IKernel kernel)
        {
            _kernel = kernel;
        }

        public object GetService(Type serviceType)
        {
            return _kernel.TryGet(serviceType);
        }

        public IEnumerable<object> GetServices(Type serviceType)
        {
            return _kernel.GetAll(serviceType);
        }
    }

Nothing terribly complicated, but a lot of fluff.

I saw in a demo at last week’s C4MVC talk that the DependencyResolver used to create the APIController classes had some SetResolver overloads I had never seen before, and lo and behold, the standard MVC DependencyResolver has them as well.

So instead of providing the SetResolver method with a whole DependencyResolver class, you can just provide the two delegates that would normally live in the resolver (GetService and GetServices):

    DependencyResolver.SetResolver(
        x => kernel.TryGet(x),
        x => kernel.GetAll(x));

POW!  Cut that 20+ line class down to a method that takes two parameters.  BAM!

MVC4 Quick Tip #1–Put your API Controllers in a different folder to avoid class naming collisions

This post is the start of a series of quick tips for Asp.Net MVC 4, which was released in beta form last week.  You can find out more about MVC4 and download the beta from www.asp.net/mvc/mvc4.

Disclaimer: This code is based on MVC 4 Beta 1, and might change in future releases.

MVC 4 ships with the new Asp.net Web APIs, which allow you to create RESTful services from within your web projects (Web Forms or MVC).  In previous versions of MVC, people commonly used Controllers to return JSON to front-end websites and other clients.  While this worked for many scenarios, it was difficult to create services that were truly RESTful in nature, taking advantage of all that HTTP has to offer.

With Web APIs, you can now create “API” Controllers.  While these controllers don’t require you to follow REST, their default behavior encourages you to do so.

When starting up a new MVC 4 project, you might be tempted to put new API Controllers in the same folder (or more accurately, namespace) as your MVC Controllers.  While this will work, you are limited in that you cannot have two classes with the same name in the same namespace, and it will be a common scenario to have an MVC Controller with the same name as an API Controller.  For instance, you may have an MVC AccountController to serve views for Accounts in your system, and an API AccountController to respond to XHR requests from your website (or mobile devices, other services, etc.).  One way I have found to get around this limitation is to put your API Controllers in a folder called API (or whatever folder/namespace you wish), like so:


Default routes in MVC 4 will respond to MVC controllers at /{controller}, and the default API route will respond at /api/{controller}.  Since the AccountsController in the API folder inherits from APIController, it is okay to have two classes named AccountsController in your project; the routing engine will know to use the correct one in the API folder.
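For reference, the two default route registrations in a new MVC 4 project look roughly like this (taken from memory of the project template, so verify against your generated global.asax.cs):

```csharp
// API route: matched first for URLs that start with /api
routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional });

// Standard MVC route
routes.MapRoute(
    name: "Default",
    url: "{controller}/{action}/{id}",
    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
```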

A quick update on Rocky Mountain Tech Trifecta 2012

I have been getting a lot of inquiries about what, if anything, is happening with the Rocky Mountain Tech Trifecta this year.  And rightfully so, as it has been nearly a year since last year’s Trifecta took place.

The answer is yes, there will be a Trifecta this year, and I am currently trying to nail down the dates and logistics with our venue.  I wanted to host the Trifecta around a month later this year compared to years past, but it looks like it will be a bit longer of a wait than that.  I should have everything worked out in the next few days and will be announcing the date soon, so stay tuned to www.rmtechtrifecta.com and @rmtechtrifecta.