Double Towel Racks – The Easy Way To Increase Space

Bathrooms almost always pose a storage challenge. They often have minimal space for all the jobs we need to do in there and for storing everything we use while we're at it.

When a bathroom is shared by a couple of people, or even an entire family, unique storage challenges come up, and they require unique solutions. When, say, four people use the same room for bathing, an obvious problem arises: where do you put all the towels? Bathroom hardware manufacturers came up with a simple solution: double towel racks.

A traditional single towel rack provides sufficient space to dry one towel. If you've got four people using four towels each day, and you have a typical bathroom, you'll need a wall covered with towel rods to provide enough drying space.

Double towel racks provide an innovative solution to this all-too-common bathroom storage problem. You'll find double towel racks in traditional finishes like polished chrome, polished brass, and antique brass, and in popular finishes like brushed nickel and oil-rubbed bronze. You can find economical versions in unfinished wood and ceramic-and-plastic. Regardless of the amount of wall space you have available to install this hardware, you'll find one to fit your space; they come in a range of standard sizes.

If your bathroom is short on storage, you'll usually be open to considering any new space-saving solutions. You can find bathroom shelves with a double towel rack installed below. Imagine: in the space where you could normally dry a towel or two, you can double your hanging space and have room to store a few fresh folded towels and other bathroom essentials, too.

Double towel racks are an excellent solution when you've got lots of damp towels to handle, but other solutions do exist:

• Install a row of pegs or hooks along the wall of the bathroom.
• Install one or more multi-prong hooks on the back of the bathroom door.
• Buy a shower curtain rod with a towel rack incorporated in its design.
• When you purchase shower doors, look for ones where the handles double as towel bars.
• Install suction-cup hooks inside the tub surround.
• Attach a swing-arm towel bar to the wall next to your tub or shower. This way, the towel bars extend into the room; they are not limited to hanging against a wall.
• Hang a hook over the bathroom door, linen closet door, or the door of the water closet. These over-door hooks come in single, double, and multiple hook versions in colors and finishes that either stand out or blend in.
• Repurpose an old-style coat rack and use it to hang towels in the bathroom. It takes up only a single square foot of precious floor space.
• If you want to add furniture to your bathroom, look for a hall tree, which is usually reserved for use in the foyer or a mudroom. They come in many styles and finishes, equipped with hooks, mirrors, storage benches, and shelves.
• If you're fortunate enough to have a sizable linen closet in your bathroom, visit the closet organization section of your home improvement store. These stores have trained personnel who can help you look at the space you have and redesign it to suit your needs.

Installing a couple of double towel racks can provide a simple way to add storage space to your bathroom. But investigate all the possible storage options for your unique bathroom design challenges. You're not limited to one solution: think creatively and combine solutions to make a bathroom that works for you.


Why Do We Need Software Engineering?

To understand the necessity for software engineering, we must pause briefly to look back at the recent history of computing. This history will help us to understand the problems that started to become obvious in the late sixties and early seventies, and the solutions that have led to the creation of the field of software engineering. These problems were referred to by some as “The Software Crisis,” so named for the symptoms of the problem. The situation might also have been called “The Complexity Barrier,” so named for the primary cause of the problems. Some refer to the software crisis in the past tense. The crisis is far from over, but thanks to the development of many new techniques that are now included under the title of software engineering, we have made and are continuing to make progress.

In the early days of computing the primary concern was with building or acquiring the hardware. Software was almost expected to take care of itself. The consensus held that “hardware” is “hard” to change, while “software” is “soft,” or easy to change. Accordingly, most people in the industry carefully planned hardware development but gave considerably less forethought to the software. If the software didn’t work, they believed, it would be easy enough to change it until it did work. In that case, why make the effort to plan?

The cost of software amounted to such a small fraction of the cost of the hardware that no one considered it very important to manage its development. Everyone, however, saw the importance of producing programs that were efficient and ran fast, because this saved time on the expensive hardware. Spending people’s time to save machine time was assumed to be a good trade. Making the people process efficient received little priority.

This approach proved satisfactory in the early days of computing, when the software was simple. However, as computing matured, programs became more complex and projects grew larger. Whereas programs had previously been routinely specified, written, operated, and maintained all by the same person, they began to be developed by teams of programmers to meet someone else’s expectations.

Individual effort gave way to team effort. Communication and coordination which once went on within the head of one person had to occur between the heads of many persons, making the whole process very much more complicated. As a result, communication, management, planning and documentation became critical.

Consider this analogy: a carpenter might work alone to build a simple house for himself or herself without more than a general concept of a plan. He or she could work things out or make adjustments as the work progressed. That’s how early programs were written. But if the home is more elaborate, or if it is built for someone else, the carpenter has to plan more carefully how the house is to be built. Plans need to be reviewed with the future owner before construction starts. And if the house is to be built by many carpenters, the whole project certainly has to be planned before work starts so that as one carpenter builds one part of the house, another is not building the other side of a different house. Scheduling becomes a key element so that cement contractors pour the basement walls before the carpenters start the framing. As the house becomes more complex and more people’s work has to be coordinated, blueprints and management plans are required.

As programs became more complex, the early methods used to make blueprints (flowcharts) were no longer satisfactory to represent this greater complexity. And thus it became difficult for one person who needed a program written to convey to another person, the programmer, just what was wanted, or for programmers to convey to each other what they were doing. In fact, without better methods of representation it became difficult for even one programmer to keep track of what he or she was doing.

The time required to write programs and their costs began to exceed all estimates. It was not unusual for systems to cost more than twice what had been estimated and to take weeks, months or years longer than expected to complete. The systems turned over to the client frequently did not work correctly because the money or time had run out before the programs could be made to work as originally intended. Or the program was so complex that every attempt to fix a problem produced more problems than it fixed. As clients finally saw what they were getting, they often changed their minds about what they wanted. At least one very large military software systems project costing several hundred million dollars was abandoned because it could never be made to work properly.

The quality of programs also became a big concern. As computers and their programs were used for more vital tasks, like monitoring life support equipment, program quality took on new meaning. Since we had increased our dependency on computers and in many cases could no longer get along without them, we discovered how important it is that they work correctly.

Making a change within a complex program turned out to be very expensive. Often even to get the program to do something slightly different was so hard that it was easier to throw out the old program and start over. This, of course, was costly. Part of the evolution in the software engineering approach was learning to develop systems that are built well enough the first time so that simple changes can be made easily.

At the same time, hardware was growing ever less expensive. Tubes were replaced by transistors and transistors were replaced by integrated circuits, until microcomputers costing less than three thousand dollars could do the work of machines that had once cost several million. As an indication of how fast change was occurring, the cost of a given amount of computing decreased by one half every two years. Given this realignment, the time and cost to develop the software were no longer so small, compared to the hardware, that they could be ignored.

As the cost of hardware plummeted, software continued to be written by humans, whose wages were rising. The savings from productivity improvements in software development from the use of assemblers, compilers, and database management systems did not proceed as rapidly as the savings in hardware costs. Indeed, today software costs not only can no longer be ignored, they have become larger than the hardware costs. Some current developments, such as nonprocedural (fourth generation) languages and the use of artificial intelligence (fifth generation), show promise of increasing software development productivity, but we are only beginning to see their potential.

Another problem was that in the past programs were often written before it was fully understood what the program needed to do. Once the program had been written, the client began to express dissatisfaction. And if the client was dissatisfied, ultimately the producer, too, was unhappy. As time went by, software developers learned to lay out with paper and pencil exactly what they intended to do before starting. Then they could review the plans with the client to see if they met the client’s expectations. It is simpler and less expensive to make changes to this paper-and-pencil version than to make them after the system has been built. Using good planning makes it less likely that changes will have to be made once the program is finished.

Unfortunately, until several years ago no good method of representation existed to describe satisfactorily systems as complex as those being developed today. The only good representation of what the product would look like was the finished product itself. Developers could not show clients what they were planning. And clients could not see whether the software was what they wanted until it was finally built. Then it was too expensive to change.

Again, consider the analogy of building construction. An architect can draw a floor plan. The client can usually gain some understanding of what the architect has planned and give feedback as to whether it is appropriate. Floor plans are reasonably easy for the layperson to understand because most people are familiar with drawings representing geometrical objects. The architect and the client share common concepts about space and geometry. But the software engineer must represent for the client a system involving logic and information processing. Since they do not already have a language of common concepts, the software engineer must teach a new language to the client before they can communicate.

Moreover, it is important that this language be simple so it can be learned quickly.


Review of Takeoff Software for Estimating Construction

So often people want to rush out and buy estimating software or takeoff software without first trying to define their internal estimating processes. Once the estimating process is clearly defined, then and only then can you actually try to compartmentalize the process into segments. So often the segment is really quantity takeoff. Takeoff of what, you may wonder? That is the million-dollar question. This article will discuss the takeoff software process, which is usually associated with estimating software processes. For some folks the takeoff process means taking off materials; for many others it means taking off scoped systems to create estimates or proposals. This review or comparison will not try to explain the estimating software process, but will bring to you valid quantity takeoff thinking among estimators in a quest to find which product thinks the way you do. These are the opinions of the author.

I will review and compare 3 types of measuring takeoff products:

It is extremely important to note that these are ONLY measuring takeoff programs, NOT estimating programs.

1) Planswift

2) On-Screen Takeoff by On Center Software

3) Electronic Plan Takeoff Software

All three products have their strengths; however, Planswift and On-Screen Takeoff are stand-alone products, while Electronic Plan Takeoff is dynamically integrated live with Microsoft Excel, which means that it starts, finishes, and saves in Excel. They all integrate with Excel, but you will have to evaluate your thought process and decide which of the three products' workflows is closest to how you think. For instance, what is the first thing you do when you get a set of plans? Typically, you start flipping through the plans to see how involved the project is and what type of work you see that is attractive for your company. Then, when you decide you are going to estimate this job, more often than not you start like 80% of companies in the world of construction estimating: by opening your takeoff master template Excel spreadsheet. You rename your spreadsheet for the new job or project and off you go performing takeoff. This is where the differences are:

In Planswift, you decide what drawing you are on and then you perform the measuring of an item you want to take off on the plan. Unfortunately, that is not exactly how an estimator thinks. Planswift does give you the ability to add a type of takeoff item on the fly by naming it and then performing takeoff of it; somewhat of a very manual and slow process. They also provide the ability to apply a type of assembly to a takeoff to aggregate quantities of items in that assembly. Not quite the way an estimator thinks. It forces you to jump to different screens, which slows down the process. Typically, the main start of anyone’s takeoff process, or what some may think of as a checklist approach, is to start with your own spreadsheet of YOUR items. Those items can be material items or scoped assembly system items. Either way, by starting with a master spreadsheet, say in Excel for example, many estimators think of this as a risk reducer, so as not to forget things they normally take off. Because Planswift is a stand-alone takeoff program, it typically saves your takeoff images in Planswift instead of with your estimate in Excel, if Excel is your estimating system. If you are using Excel, you have to manually save your takeoff measurement numbers in Excel and your takeoff images in Planswift or elsewhere, just not in Excel where the takeoff quantity resides. Again, if you want to integrate with Excel, they force you to either export or import takeoff items from Excel rather than being dynamically integrated live with Excel. They do, however, have the ability to dump the measured quantity from Planswift into any Excel spreadsheet or Word document. The main purpose and primary focus of this program is measuring, and it does a good job at that function. Most of the other functions require you to jump around different screens, and essentially you lose track of where you are. There are some features that attempt to address the estimating process; however, many features are missing for Planswift to be a full-fledged estimating system; it is NOT one. Planswift does integrate with the leading estimating system, Sage Timberline, but the integration is weak. Since Timberline’s power is in assembly takeoff, and that is where most estimators reside in Timberline, Planswift does not give the estimator the ability to add quantities of miscellaneous Timberline items or one-time items that need to be added on the fly to an assembly while they are in Planswift at the Timberline interview screen and in the measuring phase. Planswift does allow the deleting of assembly-generated items, as well as adjusting assembly item quantities, but in a different screen. Again, to perform all that, you are forced to jump around to different screens. No assembly is ever perfect in any estimating system, since project conditions are always uniquely different; therefore, having to add items to an assembly is extremely important. Adding items and associated quantities is an absolute requirement any estimator typically faces during the takeoff measuring and estimating phase; something that Planswift struggles with as related to Timberline Estimating. Planswift does allow the direct send of measurements to Timberline Estimating items and assemblies while in Timberline Estimating, just as you would do with the old digitizer measuring boards. Training, support, and maintenance are extra for Planswift.
On-Screen Takeoff by On Center Software and Planswift charge their annual maintenance and support fees per license (mandatory), which costs the end user more annually, especially if a customer has more than one license.
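To make the "dump into Excel" idea above concrete, here is a rough sketch in Python of what pushing a measured quantity into a cell of an existing estimate workbook can look like. It is a generic illustration using the openpyxl library, not Planswift's actual export mechanism; the workbook name, sheet, and cell are hypothetical.

```python
# A generic sketch of "dumping" a measured quantity into an Excel workbook,
# here with the openpyxl library. This is not Planswift's actual export
# mechanism; the workbook, sheet, and cell are hypothetical.
from openpyxl import load_workbook

def export_quantity(xlsx_path, sheet_name, cell, quantity):
    """Write one takeoff quantity into an existing estimate workbook."""
    wb = load_workbook(xlsx_path)
    ws = wb[sheet_name]
    ws[cell] = quantity
    wb.save(xlsx_path)

# e.g. push 1,250 SF of drywall into the estimate template
export_quantity("estimate_template.xlsx", "Takeoff", "C12", 1250)
```

Notice that this is a one-way, file-based hand-off: the takeoff images and the workbook still live in two different places, which is the limitation the review describes.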

On Center’s On-Screen Takeoff is the granddaddy of software takeoff products, having been around the longest. On Center recognizes that On-Screen Takeoff is primarily a measuring program. That is why they have a separate estimating program named QuickBid for those who want an estimating program. On Center does not try to trick you into thinking it is an estimating system. In On-Screen Takeoff, you also decide what drawing you are on and then perform the measuring of the plan. BUT, before you start, you can load a master set of styles for the things you typically take off or measure from your own library. That process seems less complicated than Planswift’s. On-Screen Takeoff does give you the ability to add a type of takeoff item on the fly by naming it and then performing takeoff of it; somewhat of a manual and slow process as well. The program comes with many features, ranging from simple measuring to advanced measuring issues, all with attention to detail regarding easy navigation for the takeoff process. On Center does a very good job at that. However, there seems to be a disconnect from the Excel spreadsheet items you may use for estimating and/or proposals. The integration with Microsoft Excel is not a dynamic live link; it feels more like an afterthought, in my opinion. Yes, you can establish links from named styles to cells or ranges in Excel, but it is somewhat rigid. The question you will have to ask yourself, because it will come up more often than not, is: what do you do when you need to add things on the fly during takeoff and in an Excel spreadsheet? Again, there will be manual associations you will have to establish with Excel, which is another major slowdown. You have to manually save your takeoff measurement numbers in Excel and your takeoff images in On-Screen or anywhere you decide, except that the takeoff images will not be saved in Excel where the takeoff quantity resides. This type of situation arises when a takeoff program is a stand-alone program. On Center’s On-Screen Takeoff has the best integration with the most widely used estimating system in the USA: Sage Timberline Estimating. It basically mimics the same interview process as you would use with the old digitizer measuring boards. By working directly with Timberline, On-Screen Takeoff allows the estimator to perform takeoff for a Timberline variable question and immediately return the takeoff quantity directly into a Timberline assembly at that variable question. By virtue of this process, On-Screen Takeoff allows the estimator to continue his or her Timberline interview process in Sage Timberline Estimating, reviewing and massaging generated quantities or adding items in a Timberline assembly as the estimator sees fit. That workflow process gives full control to the estimator; good job, On Center. Training, support, and maintenance are extra for On-Screen Takeoff, and, as noted above, the annual fees are charged per license.

This next system is ONLY for you if your estimating system or proposal generator is Microsoft Excel. Electronic Plan Takeoff Software is a plug-in for Excel. You start your spreadsheet, you perform the measuring takeoff, and you may even add some more items on the fly, all while you are in the measuring phase in the Electronic Plan Takeoff program. When you are done, even if you added items on the fly, they automatically appear in your Excel spreadsheet. Excel is in control of everything. Your project is started in Excel, your takeoff is saved in Excel, and the estimate or proposal can be produced there in Excel; one program, one place. Many takeoff programs interface with Excel somehow, but only Electronic Plan Takeoff is live-linked with Excel, meaning all your Excel spreadsheet descriptions appear in the measuring takeoff program, so you always know where you are in Excel. That is a huge difference in comparison to Planswift and On-Screen Takeoff. You can even change the description of a takeoff item in Electronic Plan Takeoff and it is automatically changed, live, in your Excel spreadsheet. When you talk about the estimating and takeoff phase, you must keep processes clean and easy, and this program does just that. There is no getting lost in this program. Just like the other programs reviewed above, the central focus of this program is takeoff measuring, and it does a GREAT job at that. The navigation within the program is really simple and easy. It is not made to work with other estimating systems, but there is a version that allows the direct send of measurements to any Microsoft Windows program awaiting a keyboard entry, just as you would do with digitizer measuring boards. There is also a version that works with digitizer boards as well. If you use Microsoft Excel for estimating, or takeoffs, or proposals, then this Electronic Plan Takeoff program for Excel would be your best choice. The integration with Excel is unmatched in Electronic Plan Takeoff compared to Planswift or On-Screen Takeoff. What is quite different about Electronic Plan Takeoff is that training, support, and maintenance are INCLUDED with a purchase, whereas they are extra for Planswift and On-Screen Takeoff. Moreover, annual support and maintenance for Electronic Plan Takeoff in year two and beyond is a low fee per company per year, instead of the mandatory per-license fees that Planswift and On-Screen Takeoff charge.
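For contrast with the file-based export sketched earlier, the snippet below illustrates the general idea of a live link to a running Excel session, using the xlwings library to read an item description from the spreadsheet and write the measured quantity back immediately. It is only a sketch of the concept under stated assumptions, not Electronic Plan Takeoff's implementation; the workbook, sheet, and cell addresses are made up.

```python
# A rough sketch of a "live" Excel link using the xlwings library, which
# drives a running Excel session. Generic illustration only; not the
# vendor's implementation. Workbook, sheet, and cell addresses are made up.
import xlwings as xw

wb = xw.Book("estimate.xlsx")        # attach to the open estimate workbook
sheet = wb.sheets["Takeoff"]

# The item description lives in Excel; the measuring tool reads it...
item_description = sheet.range("B12").value

# ...and writes the measured quantity straight back into the same row,
# so the spreadsheet updates the moment the measurement is finished.
sheet.range("C12").value = 1250      # e.g. 1,250 SF measured on the plan

print(f"Updated '{item_description}' live in Excel")
```

The point of the contrast: here nothing is exported or re-imported; the spreadsheet is the single working copy, which is the workflow the review describes as "one program, one place."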

Microsoft and Excel are registered trademarks of Microsoft Corporation. Planswift is the registered trademark of Tech Unlimited, Inc. On-Screen Takeoff and QuickBid are registered trademarks of On Center Software, Inc. Sage Timberline Office, Sage Timberline Estimating are registered trademarks of Sage Software, Inc.


Corel DRAW – Best Desktop Publishing Software

Corel is a premier supplier of graphics software, and Corel DRAW is its most popular program. Corel DRAW has tools that allow the user to both create and edit images. The type of desktop publishing tools that you use will depend on the type of project. For more information and assistance, use the Corel website.

Corel DRAW is the best desktop publishing software for empowering users to create illustrations containing graphics, text, and photographs. Corel has an extensive range of tools which enable the user to edit any shape or character with ease and precision, fit text to curves, and create custom color separations. It is developed and marketed by Corel Corporation of Ottawa. Corel DRAW can open files from Adobe PageMaker, Microsoft Publisher, Word, and other programs; those programs can also print documents to Adobe PDF using the Writer printer driver, and Corel DRAW can then open and edit every aspect of the original layout and design.

Several innovations to vector-based illustration originated with Corel: a node-edit tool that operates differently on different objects, fit text-to-path, stroke-before-fill, quick fill/stroke color selection palettes, perspective projections, mesh fills and complex gradient fills.

One of this software’s many strengths is the huge range of over 1,000 fonts that it comes with, provided in both TrueType and PostScript Type 1 formats. Corel differentiates itself from its competitors in a number of ways. The first is its positioning as a graphics suite, rather than just a vector graphics program. A full range of editing tools allows the user to adjust contrast and color balance, change the format from RGB to CMYK, and add special effects such as vignettes and special borders to bitmaps. Bitmaps can also be edited more extensively using Corel PhotoPaint, opening the bitmap directly from Corel DRAW and returning to the program after saving. Its output can also drive a laser cutter to cut out drawings.
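As a rough illustration of what an RGB-to-CMYK format change involves, here is the naive, non-color-managed conversion formula sketched in Python. Corel's own conversion uses color profiles, so treat this only as a sketch of the underlying arithmetic.

```python
# Naive RGB-to-CMYK conversion; real tools use color-managed profiles,
# so this only illustrates the basic arithmetic behind the format change.
def rgb_to_cmyk(r, g, b):
    """r, g, b are 0-255; returns c, m, y, k in the range 0-1."""
    r1, g1, b1 = r / 255.0, g / 255.0, b / 255.0
    k = 1 - max(r1, g1, b1)
    if k == 1:                        # pure black
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - r1 - k) / (1 - k)
    m = (1 - g1 - k) / (1 - k)
    y = (1 - b1 - k) / (1 - k)
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))         # red -> (0.0, 1.0, 1.0, 0.0)
```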

Experts consider it the first of the Windows-based drawing programs, and it has built on this early start to become far and away the dominant drawing package on the PC. Its biggest strength, and its biggest potential limitation, is its all-encompassing approach. In the past this led to accusations of unfocused bloat, but with version 7.0 Corel addressed the criticisms with a far tighter and better rationalized program. Even so, there’s a huge range of functionality to cover.

Corel DRAW was originally developed for Microsoft Windows and currently runs on Windows XP, Windows Vista, and Windows 7. The current version, X5, was released on 23 February 2010.


Advantages to Android Game Development

The industry of mobile game development has introduced a very important aspect to the market – the ability to conceptualize, develop, and release video games on devices with far more success and ease than ever before. And with the Android app marketplace only requiring a one-time fee for submitting an application, the cost becomes almost negligible to put the product out for millions of customers to find. Even the submission process is drastically shorter than on most other smartphones, as the app regulation is far more lenient for the Android OS.

Another draw for developing games on Android devices is the featured programming language: Java. Java has long been one of the most popular programming languages for video game developers, and that makes it extremely easy for the average programmer to pick up Android development for the first time. Compared to most other mobile platforms, which usually sport modified or newly invented languages, the learning curve is reduced to nearly nothing, so a new developer can complete a game in a fraction of the time.

Another unique aspect of Android game development is the lack of standardization in the Android phone family. As the Android OS is not licensed to a single mobile phone maker, the phones themselves can vary to an extreme degree in terms of features and hardware specifications. While one device may have a fully functional A-GPS and HDMI video compatibility, another may have a QWERTY keyboard and no GPS at all. While this is certainly appealing to some developers, as they are likely to find a phone that will meet their hardware needs reasonably well, it also restricts the potential audience, as some phones will not be able to support the more complex applications.

When the game development process has finally reached the point where it can be released to the public, the developer is presented with yet another choice: which market would the game be most visible in? Unlike the iOS ecosystem, there are numerous marketplaces and app stores for Android phones, each one with its own advantages and disadvantages. From the basic Android marketplace, built to display only the apps compatible with the phone currently being used, to the Amazon app store, which offers a different free app every day, the myriad of marketing strategies can be almost daunting, which makes it all the more useful that an application can almost always be entered into multiple marketplaces without issue. However, whether it makes sense to spread attention across several different marketplaces is another question entirely.

The Android game development process overall really gives the most variety on the smartphone market. From start to finish, strategies can be hand-tailored to the developer's desires, making the game as close to the original concept as currently possible. While the audience may not be as large as that of iPhone users, Android presents itself as a strong contender, purely through its accessibility. And with the largest variety of smartphones on the current market, the possibilities for development are inexhaustible, and continued releases can only add to the capabilities the platform has to offer.


Turn Your Basement Into a Virtual Shooting Gallery

An indoor shooting simulator is easy to add on to most projection-based home theater systems, and in most cases is an inexpensive way to add hours of entertainment for the whole family. People of all ages enjoy playing the wide range of games that are available for the system; everything from “Baseball Challenge” to “Elephant Hunter” will keep your family and friends entertained. Utilizing a shooting simulator is not only a great way to add excitement to your home theater room; it is also a great way to keep your shooting skills sharp.

System Basics:

There are a few basic requirements for adding a shooting simulator to an existing home theater. The simulator runs on a normal Windows-based computer; the software is compatible with Windows XP, Windows Vista, and Windows 7. The image is broadcast through a projector to a screen, and most projectors and home theater screens are suitable for use with this simulator. Now all you need to add is a basic simulator package, which includes a rifle, case, camera, and five games. Installing the new software and hardware takes only about thirty minutes. Then you are ready to start enjoying the very best of simulated shooting. To recap, the items you need are: computer, projector, screen, and a simulator package.

Benefits of Indoor Shooting:

There are many advantages to adding an indoor shooting simulator to your home theater room; these are just a few.

Convenience- having the ability to practice your shooting skills from within your own house cuts down on drive time to the range, and you can fire up your system anytime you want.

Cost Savings- ammunition is expensive! You will save a lot of money practicing your skills using a true-to-life replica laser firearm versus using live ammo.

Safety- a laser firearm is a much safer weapon to practice with, and it’s a lot better for your hearing.

Shooting Variety- with a shooting simulator you have the ability to practice your skills on a wide range of software titles. You can practice shooting skeet and with just a touch of a button you can switch over to another game and practice your marksmanship on simulated popup targets.

Entertainment- Gather your friends and family and challenge them for the highest score or for bragging rights.

Packages and Software:

With this system, there are many packages of both hardware and software available. Looking for a portable package, or maybe a complete package if you don’t have a projector, computer, and screen? Those packages and more are available. There are over 35 software titles currently available, which can be purchased separately or in 15-game packages. Software titles are continually being added, so you will always have the option to buy the latest games on the market. Do you have the best Halloween party on your block? There is a Halloween software package that will ensure your party is unforgettable. Do you have a young hunter or marksman who could benefit from “Hunter’s Education” software? It is an option on this simulator. Teach them everything from ethical shooting to animal anatomy with the Hunter’s Ed package. Looking to hone your archery skills? This simulator has packages available for you bow enthusiasts. There are several optional firearms which can be added to the system to maximize the skill development and enjoyment of the simulator.

Adding a shooting simulator to your theater room is an easy and cost-effective way of increasing the entertainment value of your room as well as improving your shooting skills. If you would like more information on the shooting simulators or have any questions, please contact me through the website.


Mortgage Loan Origination Software – 10 Functions of Mortgage Banking

Regardless of a mortgage lending organization’s size, mortgage loan software, data security solutions, and automation tools and services should be able to assist with mortgage loan automation requirements. In today’s chaotic mortgage lending environment, origination and document security systems need to be easily configured to emphasize a company’s special needs and increase efficiencies across all aspects of the loan origination process, allowing lenders to increase quality and productivity.

Technology-driven automation is the key to succeeding in the increasingly complex, deeply scrutinized mortgage industry. Web-based (Software-as-a-Service), Enterprise mortgage software that supports the ten primary functions in mortgage banking will provide lenders with the necessary competitive advantages to succeed in today’s mortgage industry.

Ten Primary Functions in Mortgage Banking

  1. Mortgage Web site design, implementation, and hosting to provide product, service, loan status, and company information to mortgage customers and business partners
  2. Online loan applications for gathering information from borrowers and business partners that issue loan terms, disclosures, and underwriting conditions
  3. Loan origination software for managing loan data, borrower data, property data, general status reporting, and calculations
  4. Interface systems to send and receive data from real estate service providers, such as credit reports, flood determinations, automated underwriting, fraud detection, and closing documents
  5. Internal automated underwriting system that is simple enough for originators and sophisticated enough for underwriting portfolio loan products
  6. Document generation for applications, upfront disclosures, business processes, and closing documents
  7. Integrated imaging that is used from loan origination to investor delivery and for file archiving
  8. Interest rate and fee generation along with program qualification guidelines
  9. Secondary marketing data tools to track loan revenue and investor relationships, including warehouse line management and interim servicing to complete the back-office system
  10. Reporting such as loan delivery, year-end fee reporting, and HMDA reporting for loan application disposition

Web-based, enterprise mortgage software that supports the ten primary functions of mortgage banking simplifies compliance, maximizes operational efficiencies, and increases profitability.
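As a small illustration of the “calculations” mentioned in function 3, here is the standard fixed-rate amortization formula for a monthly payment, sketched in Python. The loan figures are examples only, not values from the article.

```python
# The standard fixed-rate amortization formula behind a monthly payment
# calculation. The loan figures below are illustrative examples only.
def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12               # monthly interest rate
    n = years * 12                     # number of monthly payments
    if r == 0:
        return principal / n           # zero-interest edge case
    return principal * r / (1 - (1 + r) ** -n)

# Example: $250,000 at 6% for 30 years comes to roughly $1,499 per month
print(round(monthly_payment(250_000, 0.06, 30), 2))
```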


The Advantages and Disadvantages to Bug Tracking Software

Bug tracking has been around since as early as the 1940s, just not in software form. In those early days, tracking systems were created simply with pen and paper. The practice then evolved to spreadsheets. Now there is bug tracking software like the defect tracking tool and even more specific programs like Mantis and Bugzilla, just to name a couple. As with anything that evolves, however, there will always be those that are 100% for the programs and those that are against them. This article will cover the claims – both positive and negative – about bug tracking software like the defect tracking tool.

The Positive Claims

It certainly depends on the type of bug tracking software that is used, but it seems as if there are many more advantages to these tools than disadvantages. The most obvious advantage is that these types of tools allow companies to keep a record of the issues that are reported, who fixed them, and, in some programs, even how long it took to fix the issue. Customers are encouraged to be as detailed as they can be when requesting that an issue be fixed so that companies can complete their requests as quickly as possible. The fact that the issues are recorded and saved is a huge benefit for the companies because sending the recorded bug list with the purchased software is a common practice. This is a benefit to customers because if it is a common error, they can simply look up the issue in the previously recorded bug list. However, if the list is incredibly long (a common disadvantage), it can become more of a hassle.
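A minimal sketch of the kind of record such a tool keeps is shown below: the issue, who reported it, who fixed it, and how long the fix took. The field names are illustrative and do not belong to any particular product.

```python
# A minimal sketch of a bug-tracking record: the issue, who reported it,
# who fixed it, and how long the fix took. Field names are illustrative.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class BugReport:
    bug_id: int
    summary: str
    reported_by: str
    reported_at: datetime
    assigned_to: Optional[str] = None
    fixed_at: Optional[datetime] = None
    status: str = "open"

    def close(self, fixed_by: str, when: datetime) -> None:
        """Record who fixed the issue and when."""
        self.assigned_to = fixed_by
        self.fixed_at = when
        self.status = "closed"

    def time_to_fix(self):
        """How long the issue stayed open, if it has been fixed."""
        if self.fixed_at is None:
            return None
        return self.fixed_at - self.reported_at

bug = BugReport(101, "Report totals off by one", "customer@example.com",
                datetime(2024, 3, 1, 9, 0))
bug.close("dev_team", datetime(2024, 3, 3, 16, 30))
print(bug.status, bug.time_to_fix())   # closed 2 days, 7:30:00
```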

The Negative Claims

As with anything that has a list of positive aspects there is also a list of negative aspects, though there are few. One of the biggest complaints is not so much from the bug tracking software or defect tracking tool itself but more from the process of submitting issue requests. Customers need to be extremely detailed with their issue requests if they want a detailed response. Miscommunication isn’t a fault of the product, the customer, or the company – it’s simply something that happens. Customers and companies alike both need to remember to be patient with each other and to treat each other with a mutual respect. A second complaint that was previously mentioned is the length of issues in some of these software programs.

Some customers don’t have the patience to look through a long list of previously recorded software issues, and this causes frustration for the companies that took the time to purchase software that saves those lists. The length of the issue list can also become a problem, because if there are too many issues submitted and not enough engineers to address them, some can get overlooked. Nobody likes to be forgotten, but usually these types of bug tracking software include detailed instructions and are easy to use.

Usually, when a company purchases bug tracking software or a defect tracking tool, it already has an experienced IT department in place. Whatever software is used alongside these programs should have some sort of backup for completed work, so that it does not get lost if a serious issue occurs.


Choosing the Right SDLC For Your Project

Choosing the right SDLC (Software Development Lifecycle) methodology for your project is as important to the success of the project as the implementation of any project management best practices. Choose the wrong software methodology and you will add time to the development cycle. Adding extra time to the development cycle will increase your budget and very likely prevent you from delivering the project on time.

Choosing the wrong methodology can also hamper your effective management of the project and may also interfere with the delivery of some of the project’s goals and objectives. Software development methodologies are another tool in the development shop’s tool inventory, much like your project management best practices are tools in your project manager’s tool kit. You wouldn’t choose a chainsaw to finish the edges on your kitchen cabinet doors because you know you wouldn’t get the results you want. Choose your software methodology carefully to avoid spoiling your project results.

I realize that not every project manager can choose the software methodology they will use on every project. Your organization may have invested heavily in the software methodology and supporting tools used to develop their software. There’s not much you can do in this case. Your organization won’t look favorably on a request to cast aside a methodology and tools they’ve spent thousands of dollars on because you recommend a different methodology for your project. We’ll give you some tips on how to tailor some of the methodologies to better fit with your project requirements later in this article. In the meantime, before your organization invests in software development methodologies you, or your PMO, ought to be consulted so that at least a majority of projects benefit from a good fit.

This article won’t cover every SDLC out there but we will attempt to cover the most popular ones.

Scrum

Scrum is a name rather than an acronym (which is why I haven’t capitalized the letters), although some users have created acronyms, and is commonly used together with agile software development. Scrum is typically chosen because of its iterative nature and its ability to deliver working software quickly. It is chosen to develop new products for those reasons. There is typically no role for a project manager in this methodology, the 3 key roles are: the scrum master (replacing the project manager), the product owner, and the team who design and build the system. There is only one role that you would be asked to play if your organization is committed to using this methodology, scrum master. If you should determine that this would actually be the best methodology for your project, you’ll have to re-examine your role as project manager. You can either identify a suitable scrum master and return to the bench, or fill the role of scrum master.

Scrum suits software development projects where it’s important for the project to deliver working software quickly. Scrum is an iterative methodology and uses cycles, called sprints, to build a working system. Requirements are captured in a “backlog” and a set of requirements is chosen with the help of the product owner. Requirements are chosen based on 2 criteria: the requirement takes priority over others left in the backlog, and the set of requirements chosen will build a functioning system.
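A minimal sketch of that backlog-to-sprint selection might look like the following, where items are pulled in priority order until the sprint’s capacity is full. The requirement names, priorities, and estimates are invented for illustration.

```python
# A minimal sketch of selecting a sprint from a prioritized backlog.
# Requirement names, priorities, and estimates are invented for illustration.
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    priority: int          # lower number = higher priority
    estimate_days: float   # rough effort estimate

def plan_sprint(backlog, capacity_days):
    """Pull the highest-priority items that fit the sprint's capacity."""
    sprint, remaining, used = [], [], 0.0
    for req in sorted(backlog, key=lambda r: r.priority):
        if used + req.estimate_days <= capacity_days:
            sprint.append(req)
            used += req.estimate_days
        else:
            remaining.append(req)   # stays in the backlog for a later sprint
    return sprint, remaining

backlog = [
    Requirement("User login", 1, 3),
    Requirement("Search orders", 2, 5),
    Requirement("Audit log", 3, 4),
    Requirement("Export to PDF", 4, 2),
]
sprint, backlog = plan_sprint(backlog, capacity_days=10)
print([r.name for r in sprint])     # the set chosen for this sprint
```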

During the sprint, which can last from 2 to 4 weeks maximum, no changes can be made to the requirements in the sprint. This is one of the reasons that a project manager isn’t necessary for this methodology. There is no need for requirements management because no changes are allowed to the requirements under development. All changes must occur in the requirements set in the backlog.

Scrum will be suitable for software development projects where the product is a new software product. By new I mean that it is new to the organization undertaking the project, not new in general. The methodology was developed to address a need for a method to build software when it’s necessary to learn on the fly, not all requirements are known to the organization, and the focus is on delivering a working prototype quickly to demonstrate capabilities. You need to be careful when choosing requirements to deliver in each sprint to ensure that the set developed builds a software system that is capable of demonstrating the feature set supporting the requirements included.

You also need to ensure that these requirements are well known and understood as no changes are allowed once the sprint starts. This means that any changes to the requirements must come through a new set of requirements in the backlog making changes to these requirements very expensive.

This methodology divides stakeholders into 2 groups: pigs and chickens. The inventors of this methodology chose this analogy based on the story of the pig and the chicken – it goes something like this. A pig and a chicken were walking down the road one morning and happened to notice some poor children who looked like they hadn’t eaten for days. The compassionate chicken said to the pig: “Why don’t we make those children a breakfast of ham and eggs?” The pig said: “I’m not happy with your suggestion. You’re just involved in making the breakfast, I’m totally committed!” The point to this is the product owner, scrum master, and team are all in the “pig” group. All others are in the “chicken” group. You will be in the “chicken” group if you choose the Scrum methodology as a project manager.

Waterfall

Waterfall methodology calls for each phase of the development cycle to be repeated once only. Requirements will be gathered and translated into functional specifications once, functional specifications will be translated to design once, designs will be built into software components once and the components will be tested once. The advantage of this methodology is its focus. You can concentrate the effort of all your analysts on producing functional specifications during one period rather than have the effort dispersed throughout the entire project. Focusing your resources in this way also reduces the window during which resources will be required. Programmers will not be engaged until all the functional specifications have been written and approved.

The disadvantage of this approach is its inability to teach the project team anything during the project. A key difference between the waterfall approach and an iterative methodology, such as Scrum or RUP, is the opportunity to learn lessons from the current iteration which will improve the team’s effectiveness with the next iteration. The waterfall methodology is an ideal methodology to use when the project team has built software systems very similar to the one your project is to deliver and has nothing to learn from development that would improve their performance. A good example of a project which would benefit from the waterfall methodology is a project to add functionality to a system the project team built in the not too distant past. Another example of an environment that is well suited to the waterfall methodology is a program to maintain a software system where a project is scheduled for specific periods to enhance the system. For example, an order and configuration software system which is enhanced every 4 months.

The waterfall methodology does not lend itself particularly well to projects where the requirements are not clearly understood at the outset. Iterative approaches allow the product owners or user community to examine the result of building a sub-set of requirements. Exercising the sub-set of requirements in the iteration’s build may cause the product owners or user community to re-examine those requirements or requirements to be built. You won’t have that opportunity with the waterfall method so you need to be certain of your requirements before you begin the build phase. Interpreting requirements into functionality is not the only aspect of development that can benefit from an iterative approach. Designing the system and building it can also benefit from doing these activities iteratively. You should use the waterfall method when your team is familiar with the system being developed and the tools used to develop it. You should avoid using it when developing a system for the first time or using a completely new set of tools to develop the system.

RUP

The Rational Unified Process, or RUP, combines an iterative approach with use cases to govern system development. RUP is a methodology supported by IBM and IBM provides tools (e.g. Rational Rose) that support the methodology. RUP divides the project into 4 phases:

1. Inception phase – produces requirements, business case, and high level use cases

2. Elaboration phase – produces refined use cases, architecture, a refined risk list, a refined business case, and a project plan

3. Construction phase – produces the system

4. Transition phase – transitions the system from development to production

RUP also defines 9 disciplines: 6 engineering disciplines and 3 supporting disciplines (Configuration and Change Management, Project Management, and Environment), so it is intended to work hand in hand with project management best practices.

Iteration is not limited to a specific project phase – it may even be used to govern the inception phase, but is most applicable to the construction phase. The project manager is responsible for an overall project plan which defines the deliverables for each phase, and a detailed iteration plan which manages the deliverables and tasks belonging to each phase. The purpose of the iterations is to better identify risks and mitigate them.

RUP is essentially a cross between Scrum and waterfall in that it only applies an iterative approach to project phases where the most benefit can be derived from it. RUP also emphasizes the architecture of the system being built. The strength of RUP is its adaptability to different types of projects. You could simulate some of the aspects of a Scrum method by making all 4 phases iterative, or you could simulate the waterfall method by choosing to avoid iterations altogether. RUP will be especially useful to you when you have some familiarity with the technology but need the help of Use Cases to help clarify your requirements. Use Cases can be combined with storyboarding when you are developing a software system with a user interface to simulate the interaction between the user and the system. Avoid using RUP where your team is very familiar with the technology and the system being developed and your product owners and users don’t need use cases to help clarify their requirements.

RUP is one of those methodologies that your organization is very likely to have invested heavily in. If that’s your situation, you probably don’t have the authority to select another methodology but you can tailor RUP to suit your project. Use iterations to eliminate risks and unknowns that stem from your team’s unfamiliarity with the technology or the system, or eliminate iterations where you would otherwise use the waterfall method.

JAD

Joint Application Development, or JAD, is another methodology developed by IBM. Its main focus is on the capture and interpretation of requirements, but it can be used to manage that phase in other methodologies such as waterfall. JAD gathers participants in a room to articulate and clarify requirements for the system. The project manager is required at the workshop to provide background information on the project’s goals, objectives, and system requirements. The workshop also requires a facilitator, a scribe to capture requirements, participants who contribute requirements, and members of the development team whose purpose is to observe.

JAD can be used to quickly clarify and refine requirements because all the players are gathered in one room. Your developers can avert misunderstandings or ambiguities in requirements by questioning the participants. This method can be used with just about any software methodology. Avoid using it where the organization’s needs are not clearly understood or on large, complex projects.

RAD

RAD, an acronym for Rapid Application Development, uses an iterative approach and prototyping to speed application development. Prototyping begins by building the data models and business process models that will define the software application. The prototypes are used to verify and refine the business and data models in an iterative cycle until a data model and software design are refined enough to begin construction.

The purpose of RAD is to enable development teams to create and deploy software systems in a relatively short period of time. It does this in part by replacing the traditional methods of requirements gathering, analysis, and design with prototyping and modeling; the prototyping and modeling allow the team to prove the application components faster than traditional methods such as waterfall. The advantage of this method is that it facilitates rapid development by eliminating design overhead. Its disadvantage is that in eliminating design overhead it also eliminates much of the safety net which prevents requirements from being improperly interpreted or missed altogether.

RAD is suitable for projects where the requirements are fairly well known in advance and the data is either an industry or business standard, or already in existence in the organization. It is also suitable for a small development team, or a project where the system can be broken down into individual applications that require small teams. RAD is not suitable for large, complex projects or projects where the requirements are not well understood.

LSD

Lean Software Development, or LSD, applies the principles of waste reduction from the manufacturing world to the business of developing software. The goal of LSD is to produce software in 1/3 the time, on 1/3 the budget, and with 1/3 the defects of comparable methods. Lean does this by applying 7 principles to the endeavor of software development:

1. Eliminate waste

2. Amplify Learning (both technical and business)

3. Decide on requirements as late as possible

4. Deliver as fast as possible

5. Empower the team

6. Build integrity

7. See the whole

Although Lean Manufacturing has been around for some time, its application to the process of developing software is relatively new so I wouldn’t call it a mature process.

LSD would be a suitable method to use where you have a subject matter expert in the method who has some practical experience in applying lean methods to a software development project. “Amplified” learning implies that your development team has a depth of knowledge in the software tools provided, and also a breadth of knowledge that includes an understanding of the business needs of the client. LSD would be suitable for a project where the development team has these attributes.

LSD depends on a quick turnaround and the late finalization of requirements to eliminate the majority of change requests, so will not be suitable for a project where a delayed finalization of requirements will have a poor chance of eliminating change requests, or the size and complexity of the system being developed would prevent a quick turnaround.

Extreme Programming (XP)

Extreme programming places emphasis on an ability to accommodate changes to requirements throughout the development cycle and testing so that the code produced is of a high degree of quality and has a low failure rate in the field. XP requires the developers to write concise, clear, and simple code to solve problems. This code is then thoroughly tested by unit tests to ensure that the code works exactly as the programmer intends and acceptance tests to ensure that the code meets the customer’s needs. These tests are accumulated so that all new code passes through them and the chances for a failure in the field are reduced.
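To make the unit-test side of this concrete, here is a minimal sketch using Python’s built-in unittest module. The shipping-cost rule is an invented example requirement; the point is that every change must keep tests like these passing.

```python
# A minimal sketch of the unit-test discipline XP describes, using Python's
# built-in unittest. The shipping-cost rule is an invented example
# requirement, not something taken from the article.
import unittest

def shipping_cost(order_total):
    """Orders of $50 or more ship free; otherwise a flat $5 fee applies."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return 0.0 if order_total >= 50 else 5.0

class ShippingCostTests(unittest.TestCase):
    def test_free_shipping_at_threshold(self):
        self.assertEqual(shipping_cost(50), 0.0)

    def test_flat_fee_below_threshold(self):
        self.assertEqual(shipping_cost(49.99), 5.0)

    def test_negative_total_rejected(self):
        with self.assertRaises(ValueError):
            shipping_cost(-1)

if __name__ == "__main__":
    unittest.main()   # every new change must keep these tests passing
```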

XP requires the development team to listen carefully to the needs and requirements of the customer. Ambiguities will be clarified by asking questions and providing feedback to the customer which clarifies the requirements. This ability implies a certain degree of familiarity with the customer’s business; the team will be less likely to understand the customer’s needs if they don’t understand their business.

The intent of XP is to enhance coding, testing, and listening to the point where there is less dependency on design. At some point it is expected that the system will become sufficiently complex so that it needs a design. The intent of the design is not to ensure that the coding will be tight, but that the various components will fit together and function smoothly.

XP would be a suitable software development method where the development team is knowledgeable about the customer’s business and has the tools to conduct the level of testing required for this method. Tools would include automated unit testing and reporting tools, issue capture and tracking tools, and multiple test platforms. Developers who are also business analysts and can translate a requirement directly to code are a necessity because design is more architectural than detailed. This skill is also required as developers implement changes directly into the software.

XP won’t be suitable where the development team does not possess business analysis experience and where testing is done by a quality assurance team rather than by the development team. The method can work for large complex projects as well as simple smaller ones.

There is no law that states you must choose one or the other of these methodologies for your software project. The list I’ve given you here is not a totally comprehensive list and some methodologies don’t appear on it (e.g. Agile) so if you feel that there is some other methodology that will better suit your project, run with it. You should also look at combining some of the features of each of these methods to custom make a methodology for your project. For example, the desire to eliminate waste from the process of developing software is applicable to any method you choose and there is likely waste that could be eliminated in any development shop.

Be careful to choose a methodology that is a good fit for your team, stakeholders, and customer as well as your project. Bringing in a new development methodology that your team will struggle to learn at the same time they are trying to meet tight deadlines is not a good idea. On the other hand, if you have the latitude you may want to begin learning a new method with your project.


Microsoft Access and Medical Private Practice

For physicians, medical office software installation can be nerve-wracking, not because they want to avoid electronic medical records, but because the majority of the software packages are too complicated and too expensive for them.

The good news is that you can make your medical office software system uncomplicated and relatively easy to maintain with one of the most popular database packages in use today: Microsoft Access.

Microsoft Access is a relational database system developed by Microsoft. It is one of the easiest and most flexible database management solutions for the medical office and provides data validation and user-friendly features on data entry screens. It has been the dominant lightweight database system of the last decade and has continued to grow with additional features. Access is a productive and very customizable solution for small medical practices and comes with MS Office (or standalone). The next step up in a medical environment would be MS SQL Server, but a small medical office usually needs only a lightweight application, and the added functionality of MS SQL Server comes with a heavy price.
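As a hedged sketch of how a small practice might query an Access back end from a script, the example below uses the pyodbc library and the standard Microsoft Access ODBC driver. The file path, table, and column names are hypothetical examples, not part of any particular product.

```python
# A sketch of querying an Access database file via ODBC.
# Assumes pyodbc and the Microsoft Access ODBC driver are installed;
# the file path, table, and columns are hypothetical examples.
import datetime
import pyodbc

conn_str = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\PracticeData\patients.accdb;"
)

conn = pyodbc.connect(conn_str)
cursor = conn.cursor()
cursor.execute(
    "SELECT LastName, FirstName, NextVisit FROM Patients WHERE NextVisit >= ?",
    datetime.date(2024, 1, 1),
)
for row in cursor.fetchall():
    print(row.LastName, row.FirstName, row.NextVisit)
conn.close()
```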

With this relational database system you can be up and running in an hour, which means that it is not necessary for your practice to spend a lot of money to purchase, configure, update, and maintain an SQL Server solution. Microsoft Access includes, at no additional cost, points of integration with popular software packages including Microsoft Word, Excel, and Outlook, and provides a free runtime version.

MS Access network setup is very easy. A medical office with 2-8 users is up and running within ten minutes, while installation and application maintenance is extremely simple. Virtually any user with a basic knowledge of Microsoft Access can handle all maintenance procedures without the assistance of IT personnel.

Keep in mind also that SQL Server is the flagship database system from Microsoft and is suitable for use in environments with up to thousands of users. Microsoft Access can handle 2-8 users and is limited to 2 GB of data storage.

We are convinced that the best way for private medical offices around the world to enter the world of electronic medical records is to purchase a professionally designed but inexpensive and affordable Microsoft Access based software solution.
