Here’s some info for anyone interested in getting into web development. Knowing HTML and CSS is one thing; being a web developer also requires solid programming skills.
All modern programming languages share the same fundamental constructs, under different names and with varying syntax styles. Every one can store values as variables or constants, test values conditionally, group routines into functions and call them, loop to build arrays or to wait until a condition is met, and model behavior as objects using classes with properties. Knowing the fundamentals of one language makes it easy to learn the fundamentals of another.
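Those shared fundamentals can be sketched in a few lines. Here is a minimal example in Java; the class name `Fundamentals` and the `Counter` example are my own illustrations, not from any particular library.

```java
public class Fundamentals {

    // A function (method) with a conditional test on its value.
    public static String classify(int n) {
        if (n < 0) {
            return "negative";
        } else if (n == 0) {
            return "zero";
        }
        return "positive";
    }

    // A loop that builds an array.
    public static int[] firstSquares(int count) {
        int[] squares = new int[count];   // a variable holding an array
        for (int i = 0; i < count; i++) { // loop until the condition fails
            squares[i] = i * i;
        }
        return squares;
    }

    // A class with a property (field), used as an object at run time.
    public static class Counter {
        private int value;                // property stored on the object

        public void increment() { value++; }
        public int getValue() { return value; }
    }

    public static void main(String[] args) {
        Counter counter = new Counter();  // create an object from the class
        counter.increment();
        counter.increment();
        System.out.println(classify(-5));        // prints "negative"
        System.out.println(firstSquares(4)[3]);  // prints 9
        System.out.println(counter.getValue());  // prints 2
    }
}
```

Rename the variables, swap the keywords, and the same five ideas appear in Python, C, or JavaScript.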
The first language I learned was BASIC in 1980. Back then, knowing BASIC didn’t lead to knowing others. BASIC promoted sloppy style and habits that hindered a team approach to writing giant applications. Computer Science courses at the time offered Pascal as their first language. Not having gone to college, I never learned Pascal, and when I endeavored to learn C, it was hard to shake off the idea of subroutines called by line number. BASIC didn’t even force you to break sections into subroutines. The GOTO command was hard for me to unlearn.
Eventually I understood modular C design, which helped when Perl was needed, but then I was learning C++ and the world of Object-Oriented Programming. C++ is not the best way to learn OOP; it allows C’s methodology to creep into the code. When Java was released to the public in 1996, it became the first language taught in many schools. Java forces OOP throughout: while it enforces a class structure, the inside of a class is modular, so C becomes easier to grasp as well.
Your computer does not speak the same language as you. It needs a translator. Some translators hang out with you, translating your code as they scan it. They are called interpreters. BASIC was interpreted, as were early scripting languages. Another kind of translator scans your code once and rewrites it in the computer’s language. They are called compilers. Modern scripting languages like Perl and Python blend the two: when you run the program, they scan the human code and compile it into a faster internal form, run that form until the program stops, and then throw it away. To the human it looks like the old interpreter, but it runs faster.
A problem with high-level languages is that the translation to executable machine code is automated. A human writing machine code with low-level tools such as an assembler can produce code that is compact and efficient. But the time needed to map out the algorithm, write the many lines of code just to move bits and bytes through the various registers and units inside the CPU, and test for bugs is not worth it where time equals money. If it takes a dozen programmers a year to write an application in Java, writing it in assembler code would multiply that enormously.
The trade-off is compactness and efficiency. Compilers have to understand what the coder wrote. If the code is written well, the compiler will do its best to reuse machine code instead of emitting hundreds of lines of the same algorithms. But some inefficient code still gets compiled in. The program will be larger and may be slower.
Java is unique in that the compiler does not translate human code to machine code. Instead, Java has a runtime environment, an application designed for each type of CPU. The runtime environment takes the compiled Java bytecode and translates it from there. That way, compiled Java classes will work anywhere a runtime environment is installed on the machine.
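A tiny class makes the pipeline concrete. The comments sketch the standard steps; the class name `Greeter` is just an illustration.

```java
// Compiling this file with "javac Greeter.java" produces Greeter.class,
// which contains bytecode, not machine code for any particular CPU.
// Running "java Greeter" hands that bytecode to the local runtime
// environment (the JVM), which translates it for whatever CPU it is on.
// The same Greeter.class runs unmodified on Windows, Linux, or a Mac.
public class Greeter {
    public static String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        System.out.println(greet("world")); // prints "Hello, world"
    }
}
```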
Java also uses a concept called garbage collection. The design of OOP is for an object to exist only at run time, then get destroyed after it’s no longer needed, freeing up memory. The memory doesn’t always get freed up, forcing new objects to take up more memory. This is called a memory leak. Web applications can crash when a runaway memory leak eventually uses up all of the computer’s memory. The built-in garbage collection handles this, but poor coding and sometimes bugs in the actual Java version can hinder its ability.
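One common way poor coding defeats the garbage collector is a long-lived collection that keeps growing: the collector can only reclaim objects that nothing references. A hedged sketch of that leak pattern, with a made-up `SessionCache` name:

```java
import java.util.ArrayList;
import java.util.List;

public class SessionCache {
    // A static field lives as long as the application does, so everything
    // it references is permanently reachable and can never be collected.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void remember(byte[] sessionData) {
        CACHE.add(sessionData); // added but never removed: a leak
    }

    public static int size() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        // Each "request" pins another megabyte in memory. In a web
        // application serving requests for weeks, this pattern slowly
        // consumes everything the garbage collector could otherwise free.
        for (int i = 0; i < 5; i++) {
            remember(new byte[1024 * 1024]);
        }
        System.out.println(size()); // prints 5
    }
}
```

The fix is to remove entries when they are no longer needed, so the collector can do its job.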
I’ve written before that every project requires different skills, even if it uses the same language. A large web application will require a huge back end that is separated from the front end. Early websites were mostly static HTML code. Modern websites require interaction, so in order to show the correct results on the screen, temporary HTML is built by back-end tools. I used Perl and CGI in the Nineties. When Java first came out, I made some Java applet games for my personal website. Now it’s a lot different: the web server handles Java and allows it to work independently from the real front end. Today, the back-end system is protected from the interface to the Wild West. Anything poorly designed in the front end could allow hackers to get inside. If the front end is designed to make only remote calls to the back end, then the data, which is accessible only from the back end, stays protected.
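That separation can be sketched with the JDK’s built-in `com.sun.net.httpserver.HttpServer`. This is a minimal illustration, not a production design; the endpoint path and the data are made up. The point is that the front end can only ask this one narrow question over a remote call and never touches the data store directly.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class BackEnd {
    // Private data: reachable only through the remote call below,
    // never directly from the front end.
    private static String lookupUser(String id) {
        return "42".equals(id) ? "Ada" : "unknown";
    }

    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        // The single, narrow door the front end is allowed to knock on.
        server.createContext("/api/user", exchange -> {
            String query = exchange.getRequestURI().getQuery(); // e.g. "id=42"
            String id = (query != null && query.startsWith("id="))
                    ? query.substring(3) : "";
            byte[] body = lookupUser(id).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(8080);
        System.out.println("Back end listening on /api/user?id=42");
    }
}
```

A browser front end would call this with a remote request (a fetch to `/api/user?id=42`) and render whatever comes back; even a badly written front end exposes nothing but that one endpoint.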