A note of caution: The title might be misleading. It is true that journaling file systems solve the above-mentioned problems, but they introduce new ones.
The idea behind journaling file systems is to keep track of changes to the file system rather than only its current contents.
To explain this better, I am going to give an example, first under the ext2 file system's principles and then under a journaling file system:
What happens when we change the contents of the file "test.file"? Let's assume that the inode for "test.file" lists four data blocks. The data for "test.file" resides at disk locations 3110, 3111, 3506, and 3507 (the gaps are there because, during the initial allocation of disk blocks, those between 3111 and 3506 were already allocated to some other file or files). The file is therefore fragmented: to read it in full, the hard drive has to seek to the 3110 area on the disk surface, read two blocks, then seek over to the 3500 area and read the two remaining blocks. Let's say you modify the third block; the file system will read the third block, make your changes, and rewrite it, still located at 3506. If you append to the file, the new blocks could be allocated anywhere.
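To make the layout concrete, here is a minimal C sketch of an ext2-style inode with direct block pointers. It is heavily simplified and only for illustration: a real ext2 inode has twelve direct pointers plus indirect ones, and the numbers below are simply the ones from the example.

    /* A simplified ext2-style inode: the file's data lives wherever
     * its direct block pointers say it does. */
    #include <stdio.h>

    #define BLOCK_SIZE    1024   /* a common ext2 block size */
    #define DIRECT_BLOCKS 4      /* simplified; ext2 has 12 */

    struct inode {
        unsigned long size;                  /* file size in bytes */
        unsigned long block[DIRECT_BLOCKS];  /* direct block pointers */
    };

    int main(void)
    {
        /* "test.file" from the example: two fragments on the disk */
        struct inode test_file = {
            .size  = 4 * BLOCK_SIZE,
            .block = { 3110, 3111, 3506, 3507 },
        };

        /* Reading the whole file means visiting each block in turn;
         * the jump from 3111 to 3506 is where the head has to seek. */
        for (int i = 0; i < DIRECT_BLOCKS; i++)
            printf("read block %lu (byte offset %lu on disk)\n",
                   test_file.block[i],
                   test_file.block[i] * BLOCK_SIZE);

        return 0;
    }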
In our first example with "test.file," rather than modifying the data in block 3506, a logging file system would store a copy of the inode of "test.file" and a copy of the third block in new locations on the disk. The in-memory list of inodes would be changed to point "test.file" to the new inode as well. All changes, appends, and deletes are logged to a growing part of the file system known as the "log." Every once in a while, the file system checkpoints: it updates the on-disk list of inodes and frees the unused parts of files (like the original third block of "test.file").
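Here is a toy C sketch of that copy-on-write update. The inode table, the "log" (simulated here by printing what would be appended on disk), and the block numbers are all invented for illustration; real journaling file systems are far more involved.

    #include <stdio.h>

    struct inode { unsigned long block[4]; };

    /* the in-memory inode list: file number -> current inode */
    static struct inode inode_table[16];

    static unsigned long next_free = 9000;  /* pretend free area */

    /* Our stand-in for the on-disk log: just report what a real
     * file system would append to it. */
    static void log_append(const char *what, unsigned long blk)
    {
        printf("log: %-18s written to new block %lu\n", what, blk);
    }

    /* Modify the nth block of a file without touching the old data:
     * write the new data elsewhere, log both copies, and repoint
     * the in-memory inode. */
    static void cow_write(int file, int n)
    {
        unsigned long data_blk  = next_free++;
        unsigned long inode_blk = next_free++;

        log_append("copy of data block", data_blk);
        inode_table[file].block[n] = data_blk;  /* in memory only */
        log_append("copy of inode", inode_blk);
    }

    int main(void)
    {
        /* "test.file" is file 0, laid out as in the example */
        inode_table[0] = (struct inode){ { 3110, 3111, 3506, 3507 } };

        cow_write(0, 2);  /* modify the third block */

        /* Block 3506 is now stale; a later checkpoint writes the
         * inode list to disk and frees such blocks. */
        printf("third block now at %lu; 3506 freed at next checkpoint\n",
               inode_table[0].block[2]);
        return 0;
    }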
Such a journaled file system will come back online almost immediately after a system crash. All it needs to restore is at most a few blocks, and those are readily available in the log. An fsck after a power failure will take less than a second.
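Under the same toy assumptions, recovery at mount time amounts to walking the log, replaying the records that were completely written, and discarding any record the crash tore in half; the old data such records would have replaced is still intact on disk. The record format below is made up for illustration.

    #include <stdio.h>

    struct log_record {
        int complete;            /* was the record fully written? */
        unsigned long inode_no;
        unsigned long new_block;
    };

    int main(void)
    {
        /* records found in the on-disk log after the crash */
        struct log_record log[] = {
            { 1, 42, 9000 },     /* finished before the crash */
            { 0, 42, 9001 },     /* torn write: the crash hit here */
        };
        int n = sizeof log / sizeof log[0];

        for (int i = 0; i < n; i++) {
            if (!log[i].complete) {
                printf("record %d is torn, discarding it\n", i);
                continue;        /* the old data is still valid */
            }
            printf("replaying: inode %lu now points at block %lu\n",
                   log[i].inode_no, log[i].new_block);
        }
        return 0;
    }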
That's what I call a problem well solved!
Obviously, there is a price to pay for this extra safety: overhead. Each update costs more I/O operations on the disk, and most logging operations require synchronous writes. The question for sysadmins is whether they are willing to sacrifice some overall system performance for a much safer file system.
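You can get a feel for the cost of synchronous writes with a small benchmark. The C sketch below is only illustrative (file name, block size, and write count are arbitrary): it compares ordinary buffered writes against writes through O_SYNC, the open() flag that forces each write() to reach the disk before returning, which is essentially the price a journaling file system pays on every logged update.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    /* wall-clock time in seconds */
    static double now(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    /* write 100 one-kilobyte blocks and report how long it took */
    static double timed_writes(int flags)
    {
        char block[1024] = { 0 };
        int fd = open("bench.tmp",
                      O_WRONLY | O_CREAT | O_TRUNC | flags, 0644);
        if (fd < 0) {
            perror("open");
            return 0;
        }
        double start = now();
        for (int i = 0; i < 100; i++)
            if (write(fd, block, sizeof block) < 0)
                perror("write");
        close(fd);
        return now() - start;
    }

    int main(void)
    {
        printf("buffered writes: %.3f s\n", timed_writes(0));
        printf("O_SYNC writes:   %.3f s\n", timed_writes(O_SYNC));
        unlink("bench.tmp");
        return 0;
    }

On most hardware the O_SYNC run is dramatically slower, because every write has to wait for the platter instead of the buffer cache.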
Most sysadmins decide this on a case-by-case basis. It doesn't make much sense to put the /usr directory on a journaled file system, because operations there are mostly read-only. But you would surely put a journaled file system under the /var directory or under a directory containing e-mail spool files. Luckily, you can mix your Linux file systems as you wish.
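For instance, an /etc/fstab along these lines keeps the mostly read-only /usr on ext2 and puts a journaled file system under /var. The device names and the choice of reiserfs are only examples:

    # mostly read-only: plain ext2 is fine here
    /dev/hda2   /usr   ext2       defaults   1 2
    # busy, write-heavy: a journaled file system pays off
    /dev/hda3   /var   reiserfs   defaults   1 2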
One more problem with journaling file systems is that they fragment easily. Due to the nature of their allocation, file systems with journaling soon end up with blocks scattered all over the disk. (This is also true of ext2 file systems.) A dump of the file system to tape and a restore, scheduled regularly every month or so, will not only fix the problem, but also exercise your backup/restore procedures. Never say a disadvantage can't be turned into an advantage, right?
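Such a monthly cycle could be as simple as the commands below. The tape device is only an example; -0 requests a full (level zero) dump and -u records it in /etc/dumpdates:

    dump -0u -f /dev/st0 /var
    # ... re-create the file system on the partition and remount it ...
    cd /var && restore -rf /dev/st0

The restore writes every file back contiguously, so the rebuilt file system starts out defragmented.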