Coming to web development from a static html/css background, I've had to overcome several bad coding habits over the years. Most of them started with poor assumptions about how websites work. It took a crash course in php programming and a few hard bumps before I realized how wrong my first websites really were. Here are a few of those assumptions and how I built on them to improve my programming skills.
Each URL relates to a unique file on a web server
If you're only used to working with static html files, then this assumption is largely true: every URL a visitor requests corresponds to a different file on the server. Without any kind of scripting, each file is merely a chunk of html and can only be changed by a webmaster manually editing the document. My first dynamic website used php include commands to pull a fixed header and footer into all of my pages, allowing me to make global site changes from a single file. When I finally learned more about php variables and url manipulation, I kicked myself for the extra time and work I'd spent building larger sites. There are a number of techniques for having a single script output multiple pages on a site, a concept I discussed with htaccess rewrites.
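As a rough sketch of the idea, a single script can serve every page by reading a slug from the URL, assuming an .htaccess rule along the lines of RewriteRule ^(\w+)/?$ index.php?page=$1 [L]. The rule, function name, and file layout below are my own illustration, not code from the original site:

```php
<?php
// index.php — one script serving every page of the site.

function template_for($page) {
    // Whitelist the slugs we serve so a crafted URL can't include arbitrary files.
    $allowed = array('home', 'about', 'contact');
    if (in_array($page, $allowed, true)) {
        return 'pages/' . $page . '.php';  // one small template per page
    }
    return 'pages/404.php';
}

$page = isset($_GET['page']) ? $_GET['page'] : 'home';
$template = template_for($page);
// include 'header.php'; include $template; include 'footer.php';
```

The whitelist matters: including whatever string arrives in the URL is an easy way to hand an attacker the run of your filesystem.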
One script to create a website
Once I got over the first assumption, I swung to the far extreme and started using a single file to create an entire website. This was convenient because all of my functions, variables, and logic could be used in one place without worrying about scope or complicated includes. The downfall is complexity: a single error could bring the entire website down, and as the site grows and you add more logic, the file becomes harder and harder to maintain. While this may be tempting for small sites, I've found the best solution is a balance: break scripts into multiple files while consolidating common logic into central scripts... a balancing act between having too many and too few separate files.
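The middle ground might look something like this: shared helpers live in one central file, and each page is a thin script that pulls them in. The function names and markup here are invented for illustration:

```php
<?php
// --- functions.php (shared by every page) ---
function page_title($name) {
    return htmlspecialchars($name) . ' | Example Site';
}
function nav_link($slug, $label) {
    return '<a href="/' . rawurlencode($slug) . '">' . htmlspecialchars($label) . '</a>';
}

// --- about.php (one of many thin page scripts) ---
// require_once 'functions.php';
echo page_title('About');
```

A broken page script now takes down only that page, while a fix to a shared helper still propagates everywhere.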
Storing data in flat files
I started handling data outside of my website several years before I learned what a database was. Whether it was iCal flat files storing news and date information or xml for more complex needs, I'd use php to pull and manipulate the data within these files before outputting the final html. While keeping content outside of the html structure is a great idea, the speed and amount of work needed to handle large data sets was ridiculous. The only way to sort or filter the information was to load all of the data into multi-dimensional arrays. Worse, a later server change rendered the entire website useless when it reset the file permissions to read only. The proper way to store large amounts of information is in a database. After working with databases over the last few years, I have a hard time thinking of a justifiable reason to store any sort of dynamic data in a flat file.
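To show the difference, here's a minimal sketch of the sorting and filtering that used to take pages of array-juggling, done as a single query instead. I'm using SQLite through PDO as a stand-in for whatever database you have, and the table and column names are made up:

```php
<?php
// Build a tiny in-memory database standing in for the old flat files.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE news (id INTEGER PRIMARY KEY, title TEXT, posted TEXT)');
$stmt = $db->prepare('INSERT INTO news (title, posted) VALUES (?, ?)');
$stmt->execute(array('Trail update', '2009-05-01'));
$stmt->execute(array('New map', '2009-06-15'));

// Filter and sort in the database instead of in multi-dimensional php arrays.
$recent = $db->query(
    "SELECT title FROM news WHERE posted >= '2009-06-01' ORDER BY posted DESC"
)->fetchAll(PDO::FETCH_COLUMN);
```

One WHERE clause and one ORDER BY replace the entire load-everything-then-loop routine, and the database handles concurrency and permissions for you.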
Everything in a database must be normalized
After learning about normalized data and how clean it can be, I made the mistake of normalizing my hiking map's tables as far as possible. Normalization is the process of eliminating any repeated chunk of data by moving it into separate mapping tables. This is a good idea if you're worried about how much room a large table takes up, but it's easy to go overboard. Getting any sort of usable information out of my hiking map tables now requires complex (and slow) joins. I'm not advising against normalizing database tables... but ease of maintenance and the simplicity/speed of queries should always rank higher than normalization.
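Here's a cut-down illustration of the trade-off (SQLite via PDO again; the schema and names are invented, not the actual hiking map tables). Fully normalized, even listing a trail with its difficulty takes a join:

```php
<?php
$db = new PDO('sqlite::memory:');
// Normalized: difficulty labels are split into their own lookup table.
$db->exec('CREATE TABLE difficulty (id INTEGER PRIMARY KEY, label TEXT)');
$db->exec('CREATE TABLE trails (id INTEGER PRIMARY KEY, name TEXT, difficulty_id INTEGER)');
$db->exec("INSERT INTO difficulty VALUES (1, 'Easy')");
$db->exec("INSERT INTO difficulty VALUES (2, 'Hard')");
$db->exec("INSERT INTO trails VALUES (1, 'Ridge Loop', 2)");

// Every lookup now pays for a join...
$row = $db->query(
    'SELECT t.name, d.label FROM trails t JOIN difficulty d ON d.id = t.difficulty_id'
)->fetch(PDO::FETCH_ASSOC);
// ...whereas a denormalized trails table that simply stores the label text
// trades a little repeated data for simpler, faster queries.
```

With one lookup table this is harmless; with a dozen of them stacked into every query, the joins start to dominate both the query plan and your debugging time.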
These are only a few of the incorrect assumptions I've made over the years in web programming, and I have every suspicion that some of my current techniques deserve a place on this list. Since I'm still getting used to Object-Oriented Programming and data abstraction, I've no doubt I'll be revisiting this idea in the near future.