Maintenance Driven Development

Dodgy Coder wrote a post about old school developers achieving a lot with a little. Developers such as Ken Thompson, Joe Armstrong, and Jamie Zawinski essentially eschew modern development practices and ideologies — Agile, Scrum, XP, TDD — and instead rely on thinking and print statements.

Does this really surprise anyone?

Mark Twain famously quipped he never let schooling interfere with his education. Likewise I don’t allow development methodologies to interfere with my programming.

I tend to build software both top-down and bottom-up simultaneously. I code to the middle. I start top-down to establish the landscape: who the main actors are, where the core functionality lies, and a rough sense of the underlying data model, software services and interfaces required to support that functionality. Then I switch to bottom-up mode and work on the data model and those services and interfaces to ensure they’re reasonable and will perform as expected. Sometimes changes are needed, so I rinse and repeat until I’m fairly satisfied I have a workable solution.

Now I begin the activity most non-developers think of as development. I begin fleshing everything out and connecting it all together. I work on the hard (read: risky) items first. That accomplishes two things:

  1. Assurance that my solution is technically correct and I don’t need to go back to the drawing board.
  2. Validation of the project schedule. If I do need to go back to the drawing board, the earlier I do so the better.

So now that I’m coding, how do I do it? Mainly by using print statements. Now that I’m mainly doing Java development I use log4j debug statements, but it’s the same thing. Log4j simply gives me the convenience of turning off my print statements in production without having to modify my code.

Of course I test my software, and since regression testing is essential for ongoing maintenance I capture these tests as unit tests. I have a two-fold strategy for unit testing:

  1. Coarse-grained functionality testing
  2. Fine-grained testing of risky implementation items

I’ve found this strategy both minimizes the number of test cases I need to write and minimizes the risk of bugs slipping into production.

So how is this maintenance driven development?

When I find a bug using my coarse-grained functionality tests I capture it with a newly created fine-grained test. This way I can reproduce the bug, isolate it, and work to resolve it. I proceed in this manner until all coarse-grained tests pass.

Likewise, when a bug is found in production the first step to resolving it is to create a new fine-grained test that reproduces and isolates it. Then I work to resolve it.

The bottom line is I don’t create a lot of fine-grained tests until I need them. When I need them it’s going to be for code that has been problematic at some point in the past, and now that I have a test for it I can ensure it won’t be problematic in the future.


Functional JavaScript – Memoization, Part I

Introduction

I recently attended Neal Ford’s functional thinking presentation at COJUG. Neal aims to introduce functional programming concepts to traditional developers. Neal covered a lot of ground but for now I want to focus on one aspect of his presentation: memoization.

Memoization

A key concept of functional programming is that a function’s result is determined solely by its inputs. A function called with the same inputs produces the same result every time. The result is not determined by some transient state not captured by the inputs.

Since the same inputs always map to the same result, we can cache the result in a key/value map and return it from the cache, if present, instead of calling the function. In the case of a cache miss we call the function with the inputs and insert the result into the cache. Such function result caching is called memoization. Memoization is a useful technique for functions that are called repeatedly with the same set of inputs but whose result is relatively expensive to produce.

Higher-Order Functions

Functions are first-class objects in functional languages. This means functions can be assigned to variables, passed as inputs to other functions and so on. The ability to provide a function as an input to another function allows us to create higher-order functions. When we say ‘provide a function as an input’ we mean the function itself, not the result of applying the function to some set of inputs. Most non-functional languages make this awkward at best.

We can do some pretty amazing things by combining the ability to provide a function implementation as the input to another function with the ability to generate functions at runtime as needed, via function generators. One of those things is creating a higher-order memoization function generator. We can create a memoized version of a function as needed at runtime and then use the memoized version wherever we would have used the non-memoized version. They’re interchangeable.

Function Generator for Creating Memoized Functions

Creating a memoization function generator in JavaScript is simple:

// Return a memoizing version of a function f                                                                
function memoize(f) {
   if (f instanceof Function) {
      // only memoize functions of arity 1; otherwise return the function unchanged
      if (f.length == 0 || f.length > 1) return f;

      var fn = function(x) {
         if (fn.memoizer.values[x] == null) {
            fn.memoizer.values[x] = f.call(f,x);
         }
         return fn.memoizer.values[x];
      };

      fn.memoizer = { values : [] };
      return fn;
   } else {
      return f; // garbage in, garbage out I always say!                                               
   }
}

Analysis

The heart of the implementation is the function expression assigned to fn. This is the function being generated and returned. The generated function carries a memoizer property containing the cached function results. When the generated function is called it first checks this cache for a value. In the case of a cache miss it calls through to the function f supplied to the generator and caches the result. In both cases the cached result is returned.

The rest of the implementation is housekeeping. The instanceof check ensures f really is a function; if it isn’t we return f, whatever it was, untouched. The check on f.length examines the number of declared parameters, or arity, of f. To simplify things we only memoize functions whose arity is 1 (unary functions). There’s an elegant solution for memoizing functions of higher arity that I’ll be demonstrating in a future post.

f is returned in the case where f isn’t a unary function. This is fine since memoize(f) and f are interchangeable functions producing indistinguishable results. The only downside is the caller may believe it’s using a memoized version of the function when it isn’t. The caller can determine whether the function it’s calling is a memoized function by checking for the existence of the function’s memoizer property.
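As a minimal sketch of that check (using a hypothetical doubling function, not one from this post):

var twice = function(x) { return x * 2; };
var maybeMemoized = memoize(twice);

if (maybeMemoized.memoizer) {
   // memoized: repeated calls with the same input are served from the cache
} else {
   // not memoized: memoize() handed back the original function unchanged
}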

Fibonacci

To demonstrate the powers of memoization we need a function to memoize. This example utilizes a fully recursive Fibonacci sequence generator. The parameter is the 0-based index into the Fibonacci sequence whose corresponding value is to be returned. It’s an extremely inefficient implementation, which makes it a good illustration of the potential computational time savings of memoization.

// retrieve nth fibonacci number from sequence (0-relative)
// recursive solution for fibonacci calculation is not efficient
function fibonacci(idx) {
   if (idx == 0) return 0;
   if (idx == 1) return 1;
   return fibonacci(idx-2) + fibonacci(idx-1);
}

Memoizing Fibonacci

Creating a memoized version of this fibonacci function is simple:

var fn=memoize(fibonacci);

The function fn is the memoized version of the fibonacci function. fn may be invoked in lieu of the fibonacci function like so:

fn(35);

yielding the result 9227465 in 273 ms of execution time. Once we’ve retrieved the value for 35, subsequent invocations for 35 yield the result in 1 ms according to the Google Chrome profiler, which has a minimum resolution of 1 ms, so the actual time may be even less!
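If you’d like to reproduce the measurement yourself, one simple approach (timings will vary by machine and browser) is the console’s built-in timer:

console.time('fn(35) first call');
fn(35);                          // computed recursively, then cached
console.timeEnd('fn(35) first call');

console.time('fn(35) second call');
fn(35);                          // served straight from the cache
console.timeEnd('fn(35) second call');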

What About Memory Utilization?

Memoization illustrates the classic execution time vs. execution space trade-off. More memory is consumed by caching the results but there’s the potential to save a lot of execution time. If you need to clear the function result cache to free up some memory then simply do the following:

fn.memoizer.values.length=0;

and the cached values will be released.

Insufficient Memoization

The recursive definition of fibonacci may have caused you to wonder whether the intermediate results are cached. They are not. So when computing

fn(35);

we’ve also computed:

fn(34), fn(33), fn(32)…fn(2), fn(1), fn(0)

though none of these interim results are cached. In future posts we’ll consider ways to cache these interim results.
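You can see this for yourself with the memoized fn from above:

fn(35);   // slow: full recursive computation, then the value for 35 is cached
fn(35);   // fast: cache hit
fn(34);   // slow again: the recursion went through fibonacci, not fn,
          // so the value for 34 was never added to the cache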

What’s Next

In future posts I’m going to further expand upon the ideas presented here, addressing both the memoization of the interim results of recursive functions and the memoization of non-unary functions. We’ll even be looking at ways to make the memoization generator function a property of all functions. See you then!

More to Explore

Functional JavaScript — Memoization, Part II
Functional JavaScript — Tail Call Optimization and Trampolines
Functional JavaScript — Currying
Functional JavaScript — Tail Call Optimization and Babel


DRY Career Advice

You’re probably familiar with Dave Thomas and Andy Hunt’s Don’t Repeat Yourself (DRY) principle, first discussed in their book The Pragmatic Programmer. Broadly speaking, the DRY principle can be interpreted as: don’t implement the same functionality twice within a given system, and don’t do the same thing two different ways within a given process.

We’re accustomed to applying the DRY principle to our designs and implementations, but what about our careers?

To paraphrase an infamous interview question: what do you see yourself doing in 5, 10, 15, or 20 years? I know you still want to be programming at some level. That’s why you’re reading this blog: you’re passionate about programming and creating software. You love it. You seek to continually improve your craft. You’re going to continue programming until you retire. Even then you won’t stop and will continue programming until you expire!

Many of us are in this passionate programmer category. But we’ve also dodged the question. Given our nature and passion for programming it’s easy to say we’re going to be programming 20 years from now. It’s much more difficult to say what we’re going to be programming.

I’m not talking languages and tools – the things you’ll be programming with. I’m not talking about computers, clouds, processors and memory – the things you’ll be programming on or for. We can be sure these things will change, in some ways we can’t imagine, over the course of the next 20 years.

I’m talking about what kinds of systems will you be creating? The same kinds you’re creating today? Really? Twenty years of creating the same kind of software? That sounds pretty boring. It’s also potentially career limiting. Not to mention how do you really improve your craft when you’re recreating the same thing over and over again?

Working in industry verticals or niche markets is OK. But you don’t want to find yourself creating the umpteenth accounting system 20 years from now and trying to console yourself that it’s somehow different because the implementation language is different. Or the IDE you’re using is different. Or the type of machine it runs on is different. It’s not different. It’s the umpteenth accounting system you’ve created.

If you really want to spend an entire career creating software then I have one simple piece of advice: Don’t Repeat Yourself. Don’t create the same system twice.


The PC is Dead! Long Live the PC!

I bought a 3rd generation iPad the day they were first available. I didn’t camp in line or anything of that sort. I simply went to Best Buy after work, walked up to the counter and selected the model (black, 32 GB, WiFi) I’d intended to buy. There were 6 remaining. By the time I’d left the store there was 1 remaining, so sales were pretty brisk.

I used the upcoming spring break family vacation to rationalize making the purchase on the first day the new iPad was available (I’d intended to buy one anyway). I also grabbed the Camera Connection Kit and a Smart Cover.

Setup was a breeze since I already have an iPhone. I just spent a few minutes syncing with iTunes and I was good to go. I was pleasantly surprised to find my iPad had a 100% charge on its battery. I’d expected to have to charge it first.

I let the kids pick out a few games since I knew there would be quite a bit of driving for this vacation. I was hoping the new iPad would keep them entertained during the drive. Worked like a champ! Not only did it keep the kids entertained, with a minimal amount of fighting over who gets the iPad next, the battery life exceeded my expectations. Seven hours of continuous use playing games and the battery had only been drained down to 20-25%!

Every hotel these days has free WiFi, so I never missed the 4G LTE connection or regretted skipping the extra $150 for it. It was nice at night to chill out and let everyone tell their friends on Facebook what they’d done that day. I was responsible for uploading, editing and organizing the day’s photos – that’s why I grabbed the Camera Connection Kit – and posting the ‘Photo of the Day’ on Facebook. It was in doing this task that I discovered the iPad will charge your camera. This seems like a pretty useful feature, but I couldn’t find a way to control it. It appears the iPad is going to charge your camera whether you want it to or not. I can imagine low-battery situations where you may not want it to.

You’ve all heard the new retina display is gorgeous and it’s true. Text looks printed, not displayed. Let me put it this way – it’s easier to read than my Nook which uses E Ink and is a dedicated e-reader. Pictures are fabulous and really made my Nikon CoolPix S8200 shine. You should consider the new iPad for digital picture viewing alone. Nothing compares.

Since I didn’t have another computer with me for the week I got adept at touch typing on the iPad. I still have problems with my left pinky hitting ‘q’, ‘w’, or even sometimes ‘a’ (ha!), so it isn’t without issues. It’s the auto-correct feature that makes it work so well. While the single-key error rate is a terrible-sounding 20%, especially where the left pinky is concerned, the iPad is able to determine what word you meant to type and automatically insert that word instead. The auto-correct feature is accurate more than 95% of the time, so overall touch typing is quite pleasant on the iPad. And it certainly beats using your thumbs.

To make typing perfect the next iPad needs a haptic keyboard. That way your fingertips could rest on the keys, feel the ridge on the home keys, and feel the press as you depress a key. You could do some serious typing with such a keyboard. In my opinion it would be worth the purchase price of a new iPad just to get that one new feature!

So far I haven’t told you anything you didn’t already know – the iPad is a consumption device with a gorgeous display that now coupled with the Camera Connection Kit allows you to import/edit/organize your photos. But what else can you do with it? Would you believe programming?

All you need is the Gambit Scheme application, available in the App Store for 99¢! I was reading Paul Graham’s On Lisp on the iPad – remember the iPad is a great e-reader? – when I got to the chapter on continuations. Paul first explores how continuations are used in Scheme before considering how to implement them in Lisp. Using Gambit I was able to follow along just fine creating and running sample applications on my iPad. The Gambit REPL though makes you acutely aware of the keyboard problem. There’s no auto-correct for Scheme source so that 20% single-key error rate starts to grate on your nerves. Otherwise you’re able to create and run Scheme programs on your iPad! (And yes, this is what geeks do when chilling out on vacation – program! We just like to program something different from what we’re doing at work at the time).

Adding to the iPad’s producer cred is the fact that Apple has brought out their iLife and iWork application suites for the iPad, priced at $4.99 per app. Adobe has already released Photoshop Touch for the iPad priced at $9.99. Even Microsoft is rumored to be bringing out their Office suite on the iPad later this year, though no pricing is available.

Clearly this all points to the death of the PC, especially for typical home/family usage. So why long live the PC? A colleague recently pointed out you wouldn’t want to do your taxes on an iPad. He’s right – the keyboard would drive you nuts. Likewise a typical knowledge worker in an office environment is not going to want to use an iPad either. Even if it had an awesome haptic keyboard. It’s simply not the right tool for the job. So there’s room for the PC for the foreseeable future, especially in the workplace. But for everywhere else? Watch out! The PC may be going the way of the dodo bird.


2012 IT Trends

Preface

It’s 2012 and disruptive technological change is in the air. What are the major drivers? What should we be focusing on? Here are my ideas. What are yours?

Changing Vendor Landscape

The last five years have seen major changes to the IT vendor landscape. Sun Microsystems is no longer in business. Oracle is no longer just a database company. Google has moved well beyond its search services. Amazon’s book store business is becoming a footnote to their application services. Microsoft is suffering a mid-life crisis. And Apple has become the largest company on Earth.

Not to mention the myriad successful start-ups such as Dropbox, GitHub, Yammer, Evernote, Quora and LinkedIn, many of whose services are extremely valuable to enterprise users.

This rapidly changing vendor landscape reinforces the maxim that care should be taken to avoid vendor lock-in. You simply don’t know who’s going to be around 5-10 years from now. Even if they’re a market leading, multibillion-dollar company today.

PaaS (Platform as a Service)

Utility computing and grid computing have manifested themselves as PaaS. While enterprises aren’t eager to outsource their data centers, especially their data, they are eager to utilize PaaS to offload their peak processing cycles and provide disaster recovery services.

PaaS is also being turned inward, the so-called Internal Cloud. Why can’t the management ability of the Amazon Web Services be used to provision internal server resources? Why should it be easier to provision externally hosted servers than an organization’s internally hosted servers?

SaaS (Software as a Service)

Google Docs. Gmail. Salesforce.com. GoToMeeting. Software is expected to be always available and usable from all manner of devices a user may own. No distribution hassles. No versioning hassles. No update hassles. And keep my data backed-up for me too, please.

Virtual Machine Images

Have dramatically changed workstation management and provisioning. Machine images are regularly maintained and distributed. New employees/contractors can immediately begin work having all the tools they need.

Employees no longer waste their time installing, configuring and patching software. If your machine heads south, no worries – you can easily recover by reinstalling your image.

Dynamic Languages

  • Javascript
  • Ruby
  • Groovy
  • Clojure
  • Python
  • R
  • Lisp
  • Objective-C

These, to name the more popular ones, remove boilerplate code and allow developers to focus on the problem at hand. Still working to overcome their historical perception as “toy” languages, they are now finding their way into mainstream solutions.

Functional Languages

  • F#
  • Haskell
  • Clojure
  • Scala
  • Lisp

Will solve the Moore’s Law crisis: we’ve hit a wall for single-core processing speed, so greater computing power is being achieved through a greater number of processing cores. Parallel programming provides the means for harnessing the power of all these cores, and functional programming is the only practical means of parallel programming at any large scale. That makes functional programming the only practical means of harnessing all the available power of today’s multicore processors.

Polyglot Programming

Applications are no longer a single .exe deployed to a user’s workstation. The client portion is likely to utilize HTML, CSS and JavaScript. The server portion is likely to combine object-oriented paradigms and frameworks along with functional paradigms and frameworks. Still other technologies and tools are often utilized for application integration. Right tool for the right job as they say.

This is quickly becoming the new normal. IT organizations embracing polyglot programming need to actively manage their tool set or they may find themselves unable to support all their deployed technologies.

Distributed Source Control Management

Git and Mercurial are the frontrunners in this area. These tools have redefined developer collaboration and experimentation and are yielding higher creativity and productivity.

Continuous Integration

Is a mainstay in organizations adopting agile methodologies. Code is continually built and tested, so bugs are identified early – while they can still be easily fixed. Build early, build often and fail fast.

Adoption of Agile Methodologies

It’s been eleven years since the publishing of the Agile Manifesto. Even longer since the adoption of eXtreme Programming, which introduced the radical idea of pair programming. Yet only recently have they captured the attention of the enterprise.

And capture they have. Stakeholders love seeing software as it’s being built and the flexibility of refining requirements in response to interim deliveries and ever-changing business conditions. Developers love creating software that isn’t destined to be shelfware and that actually delights its users. Developers also love not following a development process by rote but instead doing what makes sense for their particular project, stakeholders and users. In the end this leads to better/cheaper/faster deliverables. Everybody wins.

Collaboration

People work best when working together. Whether it’s working on a presentation, sharing decision support data, conferencing with one another, or simply instant messaging – people need to work together and they need tools that easily allow them to do so. These tools must also work on all devices and platforms from anywhere in the world.

Collaboration doesn’t end with the enterprise. B2B collaboration is crucial in today’s business environment. In those situations there is no control over the platform, device or location – making it even more important to adhere to open standards to achieve success.

Mobile

More smart phones were sold in 2011 than PCs, even with tablets such as the iPad included in the PC category. The netbook market no longer exists, having fallen victim to the mobile revolution. Companies implementing a Bring Your Own (mobile) Device to work policy require their IT shops to manage and integrate these new devices with their existing application portfolios.

The key is flexibility. The mobile landscape is changing so fast that today’s hot seller can be tomorrow’s dust collector. At odds are the facts that mobile devices are replaced every two to three years, yet the software developed and deployed for them must be capitalized over a five-year period. The resolution of this conflict is to support multiple devices – both existing and yet-to-be-released (or even dreamt about).

HTML5

Flash. Silverlight. Java FX. Going, going, gone. The mobile revolution has forced the abandonment of these technologies and the adoption of HTML5. Mobile Consumer/Enterprise Application Platforms (MCAP/MEAP) use HTML5 technologies to create multi-platform mobile applications.

Microsoft’s upcoming Windows 8 extends this trend by utilizing HTML5 technologies for creating native applications. HTML5 then is being used to create desktop apps, web apps, and mobile apps. Using HTML5 for multi-platform application development is a practical strategy for IT shops.

Web Operating System

The browser is the OS, or really the virtual machine – eventually replacing virtual machines such as the CLR and JVM. HTML5-enabled browsers or run time environments are now available on all platforms. These environments provide and manage nearly all resources traditionally managed by an OS:

  • file
  • caching/memory
  • communications/networking
  • 2D & 3D graphics
  • audio & video

Add to that the ability to invoke and host web services and interact with databases and we need to ask ourselves: what do we need OSes, CLRs and JVMs for anyway? What are the long term prospects for the JVM and CLR? Five years? Ten years, tops?

You say you don’t like JavaScript? You have several language choices including:

  • CoffeeScript
  • ClojureScript
  • Dart
  • Red

And many others. Take a look at this site to see what’s available.

Cloud Storage

Users need their files to be accessible and automatically synchronized across all their devices and easily shared with others. File shares and media storage are rapidly becoming relics of the past. Cloud storage makes this possible.

REST Web Services

Embodying the spirit and functionality of the web, REST web services are rapidly displacing SOAP. Good riddance. SOAP web services are opaque and require out-of-band communication to describe their payloads and how to obtain more information utilizing the retrieved data.

Though REST is certainly not new, it’s only recently that all the major enterprise development platforms have made it as easy to create REST web services as it has been to create SOAP web services. This new-found ease of creation, along with these services’ better alignment with the philosophy of the web (think HTML5), will result in our seeing a lot more of them in the near future.

3D Printing

Okay, this isn’t an IT trend per se. But it’s really cool because of the power it has to disrupt so many industries. This is one of those perception barriers where we can’t fully fathom what life is going to be like with it, and once we have it we won’t be able to imagine what life must have been like before it. This is electric lightbulb class transformative power!

What Do You Think?

What do you see as being the most important trends for 2012?


On Time Estimating

I needed to replace the kitchen faucet over the weekend. I’ll spare you the details of how long it takes a wife to pick out the faucet she’d like and how many stores you have to visit in order to find that faucet (or to ensure you really have found the right one). As fun as that side discussion may be, it’s not the point of this post.

This post, rather, is about time estimation. As in: how long is it going to take to remove the old faucet and install the new one?

Now I’ve replaced faucets before. It’s not something I do all the time, but it is something that needs to be done from time to time. I’ve done it enough to have experienced the various kinds of problems one can encounter while doing the job.

Having a very busy family schedule I employed that experience to determine the block of time I was going to need in order to get this new faucet installed. Experience dictated 3 hours would be sufficient. More than enough in fact. So I blocked out the time needed so it wouldn’t interfere with any other activities.

I was relieved to discover there were in fact shut-off valves for the sink. Though they weren’t under the sink. They were in the basement below the sink. At least they weren’t frozen (a common problem) and could be easily shut off. Score!

Likewise the supply lines were simple to remove. No frozen nuts or anything. By this time I was thinking I had seriously overestimated the time it would take to get this faucet out. I was thinking about what beer I was going to enjoy when I finished early.

This was a 3-hole installation, one of those deals where there’s a bolt protruding down through each of the outside holes and a nut, sort of like a wing nut in this case and made of plastic, threaded onto the bolt from the underside. I removed the nut on the cold water side practically effortlessly. Of course I wouldn’t be writing this entry if the same had been true for the hot water side.

That nut was frozen on. Actually the bolt had rusted and so the nut wasn’t going anywhere. So as turns the nut so turns the bolt. And of course this is in a nearly impossible to reach location anyway, never mind managing the trick of getting a box wrench on the nut and a pair of vise grips on the bolt to hold it.

The nut being made of plastic turned out to be a drag. It got chewed up, though it still wouldn’t budge. Time to get out the channel locks to try to grip the wings of the nut while using the vise grips to hold onto the bolt, all while I’m wedged under a garbage disposal and plumbing, all of which I dearly didn’t want to remove just to get this nut off.

About 45 minutes and a liberal amount of WD-40 later, I had that nut off. Ah! Now the faucet will simply lift out!

Not so fast. Turns out there’s a U-shaped bracket on the center piece with a nut fastened to a protruding nipple on the faucet stem. This was a regular old brass hex nut. Rusted. Sigh. Again, 30 minutes and a liberal amount of WD-40 later, I had that nut off too. Now the faucet lifted out.

The new faucet installation was fairly uneventful, until it came time to do the dreaded leak check. That’s when you think you have everything hooked up, so you turn the water back on and check whether anything, usually the supply lines, is leaking. Remember how I said the shut-offs were located in the basement under the sink? And I live in a ranch house? And the distance from the sink to those shut-offs is the maximal distance possible in the house?

Of course one of the supply lines was leaking. No matter how much I tightened it, it would still leak. And of course each leak check involved walking a quarter mile back and forth between those shut-off valves and the sink.

In desperation I decided to remove the offending supply line thinking I’d have to get a new one. Once I had the line removed I noticed that it was missing the black o-ring washer. It was still on the old faucet.

Place washer back where it belongs, re-install supply line, tighten everything up, walk the quarter mile back to the shut-off valve, and voila! Everything was fine. New faucet installed and working perfectly. 5 hours later. For what I thought would be a 3 hour job, tops.

What does any of this have to do with software?

Estimating. The actual time for me to complete the job was 66% over what I had estimated. And I’ve replaced faucets before! If this were an IT project it would have been deemed a failure.

What else would have happened had this been an IT project? We would have had status meetings whereupon we would discover we were falling behind schedule. And then the PM would ask what could be done to get us back on schedule? Do we need to bring in more staff? Hire an outside expert? Buy new tools? Explore new plumbing technologies?

No. We just need to be left alone to get the job done. We know what we’re doing. We’ve simply run into a snag. A snag you can’t predict beforehand and yet seems to always be there in every job you do. A snag I can’t describe to you and you can’t comprehend, unless you’ve built software before. A snag defying any estimation of how long it will take to work through. And now we’ve reached the root of the problem.


View Source on iPhone (bookmarklets)

I recently got an iPhone. But this isn’t the typical post of how great and wonderful an iPhone is, why you should get one, and so on and so forth. This post is much more practical than that.

In the course of using my new iPhone I had the need to view the source of a web page. Doesn’t matter what page it was or why I wanted to view the source. I just needed to.

So there I am, mobile safari running, my web page displayed…and now what? I couldn’t believe there wasn’t any way to view the source. I looked through every mobile safari option to no avail – there is no way to view the source.

To the interwebs! And as you’d imagine, this problem has been solved. The following site explains what you need to do:

View Source for Safari on iPhone

If you normally sync your browser bookmarks to your iPhone then that’s it! You’re done!

But I don’t, for a variety of reasons we don’t need to go into here. So am I stuck? Of course not! Here are the steps you need to follow:

  1. Open mobile safari
  2. Navigate to a site, any site
  3. Tap ‘Add Bookmark’
  4. Enter ‘View Source’ for the bookmark name
  5. Tap ‘Done’
  6. Bring up the bookmarks
  7. Tap the ‘Edit’ button
  8. Tap the ‘View Source’ bookmark you just added
  9. Delete the existing URL
  10. Paste in the following javascript:

javascript:var%20sourceWindow%20%3D%20window.open%28%27about%3Ablank%27%29%3B%20%0Avar%20newDoc%20%3D%20sourceWindow.document%3B%20%0AnewDoc.open%28%29%3B%20%0AnewDoc.write%28%27%3Chtml%3E%3Chead%3E%3Ctitle%3ESource%20of%20%27%20%2B%20document.location.href%20%2B%20%27%3C/title%3E%3C/head%3E%3Cbody%3E%3C/body%3E%3C/html%3E%27%29%3B%20%0AnewDoc.close%28%29%3B%20%0Avar%20pre%20%3D%20newDoc.body.appendChild%28newDoc.createElement%28%22pre%22%29%29%3B%20%0Apre.appendChild%28newDoc.createTextNode%28document.documentElement.innerHTML%29%29%3B
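For readability, here’s what that encoded snippet decodes to: it opens a blank page and writes the current page’s HTML into a pre element.

javascript:var sourceWindow = window.open('about:blank');
var newDoc = sourceWindow.document;
newDoc.open();
newDoc.write('<html><head><title>Source of ' + document.location.href + '</title></head><body></body></html>');
newDoc.close();
var pre = newDoc.body.appendChild(newDoc.createElement("pre"));
pre.appendChild(newDoc.createTextNode(document.documentElement.innerHTML));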

Now when you want to view the source of a web page on mobile safari, just bring up your bookmarks and tap the ‘View Source’ bookmark. It’ll open a new page containing the source of the current page. Simple!

On a final note, we should appreciate what we just did. We didn’t associate a bookmark with a URL like we usually do, but instead associated the bookmark with a javascript snippet. Such a bookmark is called a bookmarklet.

Now that we’ve learned we can associate javascript snippets with a bookmark and execute that snippet against the contents of the currently-viewed page, it makes you wonder what else can be done. A la Greasemonkey?
