You could also say: C assumes you care about memory, but Python doesn’t.
Python doesn’t really acknowledge that memory exists. You still have to be careful about running out of memory, but it is generally used in environments where a kilobyte is pretty small and ease of use is more important than speed and memory efficiency.
C is well suited for embedded programming because it gives you very low-level control over your memory.
I started with C#, but I think I have good formatting. I’m not so great with commenting though.
“You want to learn to program, rather than learning to just write code.”
For sure. Some languages, however, do tend to teach you programming. Functional languages, like F#, often restrict imperative programming, which forces you to write functionally. Then, when you go back to a language like C#, you can apply functional concepts.
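To make that concrete, here is a rough sketch in Javascript (since that’s the lingua franca of this thread, and the data is made up): the same habits a functional language forces on you - pure functions, and map/filter/reduce instead of loops and mutation - carry straight over to something like C#’s LINQ.

    // Made-up example data
    var orders = [
        { price: 10, shipped: true },
        { price: 25, shipped: false },
        { price: 5,  shipped: true }
    ];

    // Imperative habit: accumulate with a loop and mutation
    var total = 0;
    for (var i = 0; i < orders.length; i++) {
        if (orders[i].shipped) {
            total += orders[i].price;
        }
    }

    // Functional habit: pure expressions, no mutation - the same shape as
    // F#'s List.filter / List.sumBy, or C#'s Where() / Sum()
    var total2 = orders
        .filter(function (o) { return o.shipped; })
        .map(function (o) { return o.price; })
        .reduce(function (sum, p) { return sum + p; }, 0);

    console.log(total, total2); // both 15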
There are probably more, but those are the ones I can think of.
My favorites: Objective-C and C++. I like the syntax of Objective-C as well as the object model. If you need to, you can always bail and just use regular C/C++ from within Objective-C - so you get both. C/C++ is a language for computer scientists that allows for flexible object modeling while retaining the ability to alter bits at a certain address if that is your pleasure.
My least favorite: a tie.
VB and VB.NET are simply “toy” languages. There are severe limitations in the object model that make VB suitable only for applications that are relatively simple.
Javascript - let’s face facts: Javascript sucks. Yeah, it is the programming language of the web, but that does not make it good. It has no compiler, so it has no compile-time syntax checking or real debugging capability. The syntax is ambiguous and EVERYTHING is an object, which can lead to obscure errors and runtime faults that are nearly impossible to trace. I am constantly amazed that I am runtime-debugging “modern web applications” using the equivalent of alert("SomeValue = " + value);. What a joke.
Right now I am working on technology that uses in-home sensors for medical applications: Arduino boards, embedded C, and micro-computers running Debian Linux, with Javascript/AngularJS for the front-end. Data is stored on Amazon Web Services (AWS).
How does VB.NET differ from C# (besides syntactic differences)? It seems to me that they almost always do things the same way, with a few exceptions.
JS is supposed to run in the browser, so it wouldn’t really be practical for it to be a compiled language, right?
It’s also supposed to be used only for client-side scripting. Any code that requires structure and robustness is more suited for the server anyway.
You mentioned that everything is an object in JS. It’s the same with ruby as well.
That sounds pretty cool. That’s quite an array of technologies you have to be familiar with though.
Since you’ve done embedded programming, have you by any chance tried using the Ada language?
I don’t have enough classes open for senior year; otherwise I would. I’d learn it on my own, but I’m too busy designing a FIRST robot and working with Linux on an old PC.
VB’s object model is not as complex as C#’s. VB may have added new features, but to my recollection the VB object model does not allow derivation to go deeper than two levels, nor does it allow multiple inheritance. More importantly, it has no facility for interfaces or protocols, which model behaviors as objects. Those are crucial for many design patterns.
Yes, Javascript, or ECMAScript, is interpreted. This is more for practical reasons than for interpreted-vs-compiled performance. If Javascript were not interpreted, the net would be overburdened by transmitting large object-code programs instead of Javascript source to be interpreted on the client. It is more about bandwidth than computer performance. Indeed, early Javascript was SLOW. Newer engines like Google’s V8 have made this much more of a level playing field.
Yes, Javascript can be observed at runtime. Firebug and “Inspect element” (Safari, Chrome and IE) are great, but they are not debuggers. They offer a programmer’s view of the runtime Javascript stack, which is useless for out-of-scope debugging: they cannot trace values or observe in the background. The fact that I use “alert” more often than I run Firebug should be a comment on its usefulness. Nothing I can get at with “alert” is available in Firebug; it is just a marginally useful interface. For really tough bugs I have found it wanting (object is “undefined” - yeah, thanks for that…).
Javascript has traditionally been used for client-side HTML scripting. Modern frameworks such as Backbone and Angular blur this line by binding values in the HTML to values on the server, such as rows in a database. Runtimes such as Node.js run Javascript exclusively on the server.
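For anyone who hasn’t seen it, this is roughly what “Javascript on the server” looks like with Node.js - just a minimal sketch, with a placeholder port and message:

    // Minimal Node.js HTTP server - run with: node server.js
    var http = require('http');

    var server = http.createServer(function (req, res) {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('Hello from server-side Javascript\n');
    });

    server.listen(3000, function () {
        console.log('Listening on http://localhost:3000');
    });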
CoffeeScript’s compilation step is indeed one of its strong points. It allows the developer to debug the application instead of just watching it run. CoffeeScript is a technology I have never used; those I have talked to, however, rave about it. It seems to appeal to Apple people especially.
Ruby… is hard; Rails is easy. The Ruby language is actually one of the most difficult I have encountered to truly master. Sadly, I am more of a practitioner than a real developer. Ruby on Rails is great; programs in pure Ruby can be daunting.
I do not do embedded programming beyond C stuff. We have an electrical engineer who solders chips and leads and worries about race conditions at start-up. He does all the embedded work. Thankfully, I just push hex values and function arguments: CallDeviceWithArgs(0x8FE54, “Hello World”), or something like that.
To aspiring programmers/developers: learn Javascript. Regardless of my opinion, it is still the most widely used programming language in the world. If you are not going to get a Computer Science degree, learn Javascript and web frameworks like Backbone, CoffeeScript, AngularJS or Ember. If you are going to get a degree in Computer Science, learn a real TYPED object-oriented language. That will allow you to do anything, with any type of hardware. I recommend C++, Java or Objective-C. If you can understand one of those, anything in Computer Science should be within your grasp.
I used to be a Perl developer (making content management systems), and I wrote Dave’s Skill Toys site all in Perl. The Museum was also written from scratch by me in PHP/MySQL and that’s still my preferred language.
JavaScript. I mean, I can pick up pretty much anything and have a hack at it, but I spend 99% of my time in JavaScript.
For the presentation layer, I’m pretty keen on CSS… it’s not programming, but to use it “properly”, it helps to have a firm understanding of logic.
Lately, no matter which web framework is involved, everything is glued together using Grunt on Node.js. It’s really an old-school approach and not nearly as integrated as what you would get with a compiled language and an IDE. It’s throwback-ville. And yet, having build scripts, precompiling your LESS/SASS, running minification and concatenation tasks and deploying to a web root… that’s still a bit ahead of what I was doing 5 years ago, which was a collection of flat files and using the IDE as a glorified text editor (the “I” part of IDE… not so much…).
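For a rough idea of what that glue looks like, here is a minimal Gruntfile sketch - the grunt-contrib plugin names are the usual ones, but the file paths and the deploy target are made up, not my actual build:

    // Gruntfile.js - compile LESS, concatenate, minify, then copy to a web root
    module.exports = function (grunt) {
        grunt.initConfig({
            less: {
                dist: { files: { 'build/site.css': 'src/less/site.less' } }
            },
            concat: {
                dist: { src: ['src/js/**/*.js'], dest: 'build/site.js' }
            },
            uglify: {
                dist: { files: { 'build/site.min.js': ['build/site.js'] } }
            },
            copy: {
                dist: { expand: true, cwd: 'build/', src: ['**'], dest: '/var/www/site/' }
            }
        });

        grunt.loadNpmTasks('grunt-contrib-less');
        grunt.loadNpmTasks('grunt-contrib-concat');
        grunt.loadNpmTasks('grunt-contrib-uglify');
        grunt.loadNpmTasks('grunt-contrib-copy');

        // Running "grunt" with no arguments does the whole pipeline
        grunt.registerTask('default', ['less', 'concat', 'uglify', 'copy']);
    };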
JavaScript doesn’t deserve its bad rap… it’s a useful and robust language. Just remember to use best practices to avoid (or ferret out) those damned undefineds…
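The sort of best practices I mean - nothing fancy, just defensive habits (the names and defaults here are invented for illustration):

    var config = { retries: 3 };

    // Check before you dereference, instead of letting it blow up later
    if (typeof config.timeout === 'undefined') {
        config.timeout = 5000; // sensible default
    }

    // Guard nested lookups so a missing object doesn't become
    // "cannot read property of undefined" three files away
    var retries = (config && config.retries) || 1;

    // Fail loudly and early at the boundary of your own functions
    function connect(options) {
        if (!options || typeof options.host !== 'string') {
            throw new Error('connect: options.host is required');
        }
        // ... rest of connect ...
    }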
I happen to only program in C# and Haskell! My two favorite languages. I’m not working on any big projects in Haskell, but I am making a roguelike in C#.