Welcome to Valhalla Legends Archive.
 

Programming language flamewar duel to the death

Started by Banana fanna fo fanna, February 01, 2006, 04:43 PM



netytan

#17
Quote from: Warrior on February 02, 2006, 04:48 PM
Quote from: netytan on February 02, 2006, 03:44 AM
it just becomes too limiting. It's also not comparable with C++ for speed yet, though it will prevent a lot of memory leakage in your pants.

Just jesting guys,

Mark.

The programmer should prevent the leak, not the language.
Re-read your own posts.

Oh, what a purist you are, Warrior – never in my post did I say that it's the language's fault and not the programmer's. Whether or not you like it, automatic memory management is much safer and cleaner than managing it yourself – consider that most non-trivial software written the manual way contains a number of such bugs.

When I do want to manage my own resources I'll use ASM and get much more control over what's going on in the program, then link that in if that's its purpose. The best of both worlds :).

You're very welcome to manage the memory yourself and gain nothing from it. If you count longer, more bug-riddled programs that run marginally faster in most cases* as a gain, then that's up to you – but having the language do it for you doesn't place the blame with either the language or the programmer. Get over it already.

If anything, removing a whole source of bugs makes the programmer more responsible for writing bad code – there are fewer excuses :).

I personally would be much happier writing smaller, more stable programs which are more or less guaranteed not to leak than writing slightly faster programs at the expense of these bugs creeping in [as they invariably do].
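As a small illustration of the point above (in Python, one of the garbage-collected languages mentioned later in the thread): even a reference cycle, which plain reference counting or careless manual bookkeeping would leak, is reclaimed automatically by the cycle collector. A minimal sketch – the behaviour shown is specific to CPython's `gc` module:

```python
import gc
import weakref

class Node:
    """A node that points at another node, so two of them can form a cycle."""
    def __init__(self):
        self.other = None

# Create two nodes that reference each other (a reference cycle).
a, b = Node(), Node()
a.other, b.other = b, a

# A weak reference lets us observe collection without keeping `a` alive.
probe = weakref.ref(a)

# Drop the only strong references. Reference counting alone cannot reclaim
# the cycle (each node still holds the other), but the cycle collector can.
del a, b
gc.collect()

print(probe() is None)  # True – the cycle was collected
```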



On a different note, BrainFuck fans may be interested in Whitespace, a programming language based entirely around the characters we can't see, lol. It's pretty fun embedding Whitespace programs into others for a laugh: run the program as language X and it'll do one thing, run it as Whitespace and it'll do another. There's no real point, but it's interesting all the same.

http://compsoc.dur.ac.uk/whitespace/

Here's an example of "Hello, world" written in Whitespace:
http://compsoc.dur.ac.uk/whitespace/hworld.ws

Later,

Mark.

* GC has been around for a very long time (it first emerged in use in the mid-1950s) and is under constant research & development. The result is that GC has gotten to the stage where it's efficient, fast and reliable.

Skywing

#18
You still have to keep track of allocations/references even in a GC language, though.  Especially with things like multithreaded programs or objects with explicit reference counts, you often still end up having to remember to release objects, just as you would have to remember to deallocate memory in large, non-trivial programs.

Tools for finding and tracking memory leaks have also come a long way.  For instance, on Windows, I can use things like umdh and pageheap to quickly discover and investigate memory leaks in a conventional non-GC program.  Due to the deferred/implicit deletion nature of GC-based languages, such tools are often much less effective at discovering problems there.

I would submit that GC doesn't really absolve you of having to deal with resource management in many real world cases (non-trivial programs), and a lack of explicit control over resource referencing and deallocation makes non-trivial reference count/leak bugs in GC-based languages more difficult to find than in a traditional non-GC program written in a language like C.

(This has been my observation and experience given debugging problems in both cases.)

Certainly, your experiences may vary, but I don't believe that GC is all that it is cracked up to be given my experiences.
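Skywing's point that GC doesn't absolve you of resource management shows up in any GC'd language: the collector reclaims memory eventually, but non-memory resources (file handles, sockets, locks) still want explicit, deterministic release rather than waiting on a finalizer. A minimal Python sketch using only the standard library:

```python
import os
import tempfile

# A scratch file in a temporary directory.
path = os.path.join(tempfile.mkdtemp(), "log.txt")

# Explicit release via a context manager: the handle is flushed and closed
# deterministically at the end of the block, not "whenever the GC gets to it".
with open(path, "w") as f:
    f.write("hello")

# Reading back works precisely because the handle was already closed above;
# relying on the GC to close it would make this read racy.
with open(path) as f:
    print(f.read())  # hello
```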

netytan

Quote from: Skywing on February 02, 2006, 07:57 PM
You still have to keep track of allocations/references even in a GC language, though.  Especially with things like multithreaded programs or objects with explicit reference counts, you often still end up having to remember to release objects, just as you would have to remember to deallocate memory in large, non-trivial programs.

Tools for finding and tracking memory leaks have also come a long way.  For instance, on Windows, I can use things like umdh and pageheap to quickly discover and investigate memory leaks in a conventional non-GC program.  Due to the deferred/implicit deletion nature of GC-based languages, such tools are often much less effective at discovering problems there.

I would submit that GC doesn't really absolve you of having to deal with resource management in many real world cases (non-trivial programs), and a lack of explicit control over resource referencing and deallocation makes non-trivial reference count/leak bugs in GC-based languages more difficult to find than in a traditional non-GC program written in a language like C.

(This has been my observation and experience given debugging problems in both cases.)

Certainly, your experiences may vary, but I don't believe that GC is all that it is cracked up to be given my experiences.

That's understandable; I personally have never had to worry about managing resources myself in these languages. Occasionally I've called 'del' in Python etc., but not for any real reason – it just made things clearer at some point.

Reference counting in Obj-C was pretty fun for a while because it's an interesting way to do things – rather than releasing data explicitly, you manage a reference count, or mark an object for autorelease so it's freed once every object holding a reference is done with it.
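The retain/release scheme described above can be modelled in a few lines. This is a toy Python sketch of the idea only – the class and method names are made up for illustration, not actual Objective-C API:

```python
class RefCounted:
    """Toy model of Objective-C-style manual reference counting."""

    def __init__(self, name):
        self.name = name
        self.count = 1        # a newly created object starts with one owner
        self.freed = False

    def retain(self):
        """Another owner takes a reference."""
        self.count += 1

    def release(self):
        """An owner gives up its reference; free at zero."""
        self.count -= 1
        if self.count == 0:
            self.freed = True  # stand-in for dealloc

obj = RefCounted("buffer")
obj.retain()      # a second owner takes a reference (count: 2)
obj.release()     # first owner is done (count: 1)
obj.release()     # second owner is done (count: 0) -> object is freed
print(obj.freed)  # True
```

The bug class Skywing mentions falls out directly: forget one `release()` and the object lives forever; add one too many and you free an object someone still holds.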

I don't know which GC'd languages you've used, but none today should make you do that. Letting the GC handle memory is in general much better: you can't miss anything, and so don't have to go to the hassle of looking for leaks.

Most languages allow you to control when the GC will run, but any relatively decent GC should be sufficient for all but the very most demanding programs. For those (likely all being run on a PC), IMO the hot parts should be largely written in ASM when speed becomes that much of an issue, or compiled from a dynamic language (if it supports native code compilation) and tuned there.

Mark.

shout