Admin
WTF?
Seriously man, you don't even begin to make sense here...
Flash news: the IDE is not the language; that would also be doable in Ruby if anyone were ballsy enough to create a good Ruby IDE, and likewise in Python (most Smalltalk environments could do as well as or better than the current state-of-the-art Java or C# IDEs some 15 years ago, and Smalltalk is a dynamically typed language...).
As far as the line of code goes, try "yourobject.class"... yeah, you can ask your object for its type in Ruby too... or I seriously misunderstood what you asked for...
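A quick irb session showing it (Ruby 1.8-era, so small integers report Fixnum):

irb> 1.class
=> Fixnum
irb> "foo".class
=> String
irb> 1.is_a?(Numeric)
=> true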
Examples please. Not the accessors (Ruby doesn't assume anything there: members are private -- always -- and accessors to variables are only created if you create them or ask for them to be created), and not the datatypes either. Ruby is strongly typed, for god's sake; every object has a single, permanent, unchangeable type. The difference is that it is dynamically typed, which means that a name doesn't have a type -- it doesn't have anything, it's just the holder for a reference to an object that is typed. Strongly. C# or Java put constraints on their names -- a given name can only be bound to objects of a specific type -- but their type systems are not objectively stronger than Ruby's or Python's, for instance. Some people even consider them weaker, since "the type system flees the field at runtime".
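A minimal sketch of the distinction -- the name rebinds freely, each object's type never changes:

x = 1          # x refers to a Fixnum
x.class        # => Fixnum
x = "hello"    # x is rebound to a String; the Fixnum 1 itself is unchanged
x.class        # => String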
Would you have created AuthenticatedUser, AdministratorUser, WhateverUser as subclasses of an abstract User type in C#? Probably not, if only because that's the kind of thing you need to be able to change at runtime. So you'd store the role in a generic user -- you'd compose the user with its role -- in C#, and so would you in Ruby.
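A minimal sketch of that composition (the role classes here are hypothetical names):

class AdminRole; end    # hypothetical roles
class ReaderRole; end

class User
  attr_accessor :role   # the user is composed with its role
end

u = User.new
u.role = AdminRole.new
u.role = ReaderRole.new # and the role can change at runtime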
Your example doesn't hold water, because if I were to rewrite your code in C# it'd be exactly the same, and exactly as stupid, character for character.
If two classes that may be used in the same context but shouldn't be swapped for one another have the same interface, then your trouble isn't the language you're coding in, it's the guy doing the coding.
There again, you're talking about the semantics of the application. If you need an explicitly statically typed language to tell you about the semantics of your application, or to think instead of you, or to force you to think, then something is very, very wrong. And you don't write enough tests either, btw.
Admin
Yep, and, as you say, "In many respects, this is A Good Thing". I would disagree with your followup that "In many respects, it isn't", though. See, here's the thing: I have only once, in many years of programming, needed to know exactly the type of an object when I wasn't writing type-conversion routines. And that was due to a bug in a Java ODBC implementation for a particular database.
Well, it's as artificial as my examples, but if you're about to grant massive permissions to users you'd probably do something more like this:
<font size="2">users.each do |u|
u.class.required_access.merge(u.access_overrides).each do |location, permission|
grant (u, location, permission)
end
end</font>
...allowing you to have a hash of locations and permissions at a per-class level, with individual object overrides as necessary. I would say that, no matter what problem you throw at me which "absolutely requires knowledge of an object's class", I can throw back an implementation which requires no knowledge of the object's absolute class, and in many cases will be more generalised. This is not a snipe at you as a programmer, it's just a different way of thinking.
I would argue almost the exact opposite. First off, if you're having to rely on the compiler or IDE to keep track of what's going on, you're either not a very good programmer, or your design is completely fucked. Or you're programming assembler :)
Good design and a dynamic language are more than suitable for large projects. Last Ruby project I put live was > 500kLOC (including tests, so 150kLOC or so for the real project). It handily outperforms the feeding systems (written in Java, hahaha), was delivered early, fully functional, and well under budget[1]. Of course, you need quality designers and developers to do that sort of thing, but that's what we're supposed to be paid for, no matter how many "Paulas" there are out there.
General consensus is that Ruby vs Java gives ~3:1 reduction in TLOC to do the same thing, and I would estimate 5:1 comparing Objective-C vs C++. Admittedly, it's not much of a metric, but fewer LOC generally speaking means fewer bugs.
Lucky man, and yes. An awful lot more work.
The only times Ruby does type conversions are: if you explicitly ask it to, using strict or loose coercion; or, in numeric coercion, if you try to do arithmetic with random objects. Coercion routines are about the only time you'll ever need to worry about the actual class of an object.
From the pickaxe:
<font size="2">To do this, Ruby has the concept of conversion protocols—an object may elect to have
itself converted to an object of another class. Ruby has three standard ways of doing
this.
...
Methods such as to_s and to_i convert their
receiver into strings and integers. These conversion methods are not particularly strict:
if an object has some kind of decent representation as a string, for example, it will
probably have a to_s method.
...
The second form of conversion function uses methods with names such as to_str and
to_int. These are strict conversion functions: you implement them only if your object
can naturally be used every place a string or an integer could be used.
...
The third is numeric coercion.
Here’s the problem. When you write “1+2”, Ruby knows to call the + on the object 1
(a Fixnum), passing it the Fixnum 2 as a parameter. However, when you write “1+2.3”,
the same + method now receives a Float parameter. How can it know what to do
(particularly as checking the classes of your parameters is against the spirit of duck
typing)?
The answer lies in Ruby’s coercion protocol, based on the method coerce.
</font>
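To make the three protocols concrete, a short irb sketch (1.8-era output):

# Loose conversion: most things have *some* string form
42.to_s                   # => "42"
nil.to_s                  # => ""

# Strict conversion: only genuinely string-like objects implement to_str
"abc".to_str              # => "abc"
42.respond_to?(:to_str)   # => false

# Numeric coercion: for 1 + 2.5, the operands are promoted
# to a common type via coerce before + is retried
1.coerce(2.5)             # => [2.5, 1.0]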
Simon
[1] In all my time working for major corporates, I never saw a single significant C++ project get delivered on time.
Admin
Jeez, this was not meant to be a flame war. I don't have time for this, so I'll make this my last post and as quick as I can.
masklinn, tufty managed to make sense of my post. Flash news: I never said that IDEs were languages; I was referring to the fact that IDEs respond to the capabilities of a language. If you use two different languages, then the IDE will behave differently, even if the software running it is the same. There's a Ruby plugin for Eclipse, and it reacts differently from editing Java in Eclipse. For one thing, the dynamic types in Ruby mean you can't expect the IDE to tell you what type a variable is like you can with C# or Java.
My posts are written in clear English - if you can't read them, then I don't have the time to explain everything to you.
tufty, I have often had to check types. It's a fairly common thing in .NET, where a dataset of a certain structure is often defined as a class. (This means that structural elements such as tables and columns appear as members.)
I thought of a perfect one when I was reading about it. I can't remember it right now and I have to go and do some real work, but I'll get back to you. It's not always assumptions about your code, it's assumptions about what you may need to do - such as checking types.
Nope, Boo is strongly typed, which is why it doesn't work, since it doesn't ask you for types. Ruby isn't. Try this in irb:
i = 1
i = i + 2.5
It comes out as 3.5. The object has changed from integer to float. In Boo it comes out as 3. That's because Boo does not do the conversion on the fly.
Theoretically ANY two classes could be used in the same context. By your logic I need to scope all my members, removing one of the biggest advantages of OOP.
You're wrong - I can check the type in C# and tell it not to do anything unless it's of type AdminUser.
You're right about the tests, but that wouldn't help much in this situation. The tests can only test what the developer is thinking about. However, you're completely missing the point. Again. I'm not asking the language to think instead of me, but to prompt me to think. In a large application, I have a million things to think about, and sometimes I need prompting because I can't keep track of them all. If you think you're not the same, then you're deluding yourself. Or you're not human.
Anyway, my boss is standing over me, so I need to go work.
Admin
I certainly haven't taken anything you've written as flames, and it's an interesting discussion. Still, work must take priority.
Not totally sure I'm following you here. Any chance of some reading material that explains in more detail?
Yeah, of course, but you can do that in Ruby too, if you really want to. It's not considered good style, and kind of goes against the spirit of the language, but you can do it. It's a bad code smell, though, and implies that there might be something wrong at the design level.
Simon
Admin
Turns out I have less to do than I thought due to some mismanagement. Cool....
I'm glad of that, and to be fair, you've not flamed me either. But yeah, work is always there, unfortunately...
Any book that deals with ADO.NET will talk about this. Or read this: http://www.c-sharpcorner.com/Code/2004/Jan/TypedDataSets.asp or this http://weblogs.asp.net/rosherove/articles/5517.aspx
Basically the idea is that, rather than having to know your datastructure (for example, when getting things from a database, you have to specify a column to retrieve, either by name or number) you can have a class which represents that chunk of data. For a contact, rather than saying "get me the firstname column" or worse, "get me the second column", you'd simply reference the FirstName property of that dataset.
Every typed dataset is essentially a class that derives from DataSet. So suppose you can get a number of different chunks of data back, each with its own structure, and you need to find out what type of structure you have so you know how to process it. The way to do that, if you're using typed datasets, is to check the type.
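The shape of that, sketched in Ruby rather than C# (the dataset classes and import helpers are hypothetical names, not anything from the thread):

class ContactsDataset; end   # hypothetical structures
class OrdersDataset; end

def process(dataset)
  case dataset                 # case/when dispatches on class here
  when ContactsDataset then import_contacts(dataset)
  when OrdersDataset   then import_orders(dataset)
  else raise "unknown dataset structure: #{dataset.class}"
  end
end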
Ok, yeah, fair enough. But why is it bad practice? Because the rest of the language doesn't deal with types in that way, I guess, but that's not a real reason.
The idea that you should be thinking about what types you're passing around is not a bad idea - it's a good idea. It just means more typing. That's your tradeoff.
Admin
Here are some good reasons why you should not use public fields:
1. C# interfaces cannot contain public fields.
2. Public fields do not serialize.
3. Fields do not encapsulate business rules - put the business rules in one and only one place.
I am not sure how the .NET IL handles properties, but in VB 6.0, public fields are given default accessors at compile time. There is no performance difference in VB 6.0.
BTW, C# 2.0 has a cool new feature for properties where you can assign different scopes to the getter and setter.
Admin
Nope. 3.5 is a new object, probably of a different type than the 1 object. The name i now refers to the 3.5 object. I know, it's a bit surprising if you think in terms of Java, where primitive types are not objects...
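A minimal irb check of that rebinding:

i = 1
i.class                 # => Fixnum
old_id = i.object_id
i = i + 2.5
i.class                 # => Float
i.object_id == old_id   # => false -- a brand-new object; only the name moved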
Admin
Okay, they seem to be a relatively simplistic ORM layer (I say simplistic because it seems you can't add custom logic to the classes, among other things) or at least a part of one. Fair enough. And I would accept that without any kind of introspection it's going to be difficult to do much with one without knowing its type.
The way that a dynamic language handles stuff like this (think Rails' ActiveRecord[1], WebObjects' EOGenericRecord) is that you can ask the object what table it comes from, and what columns it has. The significant difference here is that you're not necessarily asking an object what class it inherits from, but what data it contains, and acting on that.
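For example, with ActiveRecord of that era (a sketch assuming a users table with these columns):

class User < ActiveRecord::Base; end

User.table_name      # => "users"
User.column_names    # => ["id", "first_name", "last_name", ...]

u = User.find(1)
u.attributes         # => {"id" => 1, "first_name" => "Bob", ...}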
The net result might be the same, but the approach is considerably different. Generally speaking, "recordset" implementations tend to be a bit of a WTF in and of themselves, though. Here for some detail: http://c2.com/cgi/wiki?MultiplePersonalityDisorder
Well, for starters, you're hardcoding all sorts of expectations in that may not be true in the future, and setting yourself up for a possible maintenance problem. Let's take the original statement (a bit hacked for brevity and readability):
<font size="2"># Grant global read / write / execute ability to directory /bin to all users
users.each{|u| grant(u, "/bin", "777")}
</font>
Now, that's pretty much the thing as originally stated, right? Now, I realise that I only want this to happen for objects of class AdminUser. Fair enough.
<font size="2">users.each{|u| grant(u, "/bin", "777") if u.is_a?(AdminUser)}</font>
There you go. Security hole fixed. But damn. I want ModUsers to have 555 (read / execute) privileges, too.
<font size="2">users.each do |u|
grant(u, "/bin", "777") if u.is_a?(AdminUser)
grant(u, "/bin", "555") if u.is_a?(ModUser)
end
</font>
Again, no problem. Although the code is starting to smell a bit. Now, I have a RestrictedAdminUser class that subclasses from AdminUser, and I want them to have the same rights as a ModUser. Behaviour-wise, they can do everything an AdminUser can do, so no shifting them around the hierarchy :)
<font size="2">users.each do |u|
grant(u, "/bin", "777") if u.is_a?(AdminUser) && !u.is_a?(RestrictedAdminUser)
grant(u, "/bin", "555") if u.is_a?(ModUser) || u.is_a?(RestrictedAdminUser)
end
</font>
It's still not too nasty, but hey, we realise that it's not just /bin that we want to grant stuff on, we also want to grant on /sbin, and the rules are different.
<font size="2">users.each do |u|
grant(u, "/bin", "777") if u.is_a?(AdminUser) && !u.is_a?(RestrictedAdminUser)
grant(u, "/bin", "555") if u.is_a?(ModUser) || u.is_a?(RestrictedAdminUser)
</font><font size="2"> grant(u, "/sbin", "777") if u.is_a?(AdminUser)
grant(u, "/sbin", "555") if u.is_a?(ModUser)
</font><font size="2"> end
</font>
We're approaching WTF-worthy code now. Then we realise that the DBA, who is an admin, requires 777 on /usr/local/pgsql but should have read-only rights on /sbin.
So, what do we do? Add another class for DBAs, and duplicate even more crap? Or do we get sensible? We start getting sensible.
<font size="2">class User
<font size="2">class AdminUser < Userend</font>
@@grants = {"/bin" => "777", "/sbin" => "777"}
end
class RestrictedAdminUser < AdminUser
@@grants = {"/bin" => "555", "/sbin" => "777"}
end
</font>
<font size="2">class ModUser < User
@@grants = {"/bin" => "555", "/sbin" => "555"}
end</font>
and then...
<font size="2">dba.grants = {"/usr/local/pgsql" => "777", "/sbin" => "444"}
users.each{|u| u.apply_grants}</font>
Well, that's a lot cleaner, and all of a sudden we don't need to know anything about the type of the user. But then we find that we need something in there that isn't, strictly speaking, a user. A daemon, for example. Different enough that it shouldn't inherit from User. Arsebuckets! But no. Easy. Pull the granting stuff into a module, and away we go
<font size="2">module Grantee
</font><font size="2"> cattr_reader :grants
attr_accessor :grants
@@grants = {}</font>
<font size="2"> def apply_grants
self.class.grants.merge(self.grants).each{|dir, perm| grant(self, dir, perm)}
end
end</font>
<font size="2">class User
include Grantee
end</font>
<font size="2">class Daemon
theninclude Grantee
end</font>
<font style="font-family: Courier New;" size="2">dba.grants = {"/usr/local/pgsql" => "777", "/sbin" => "444"}
webdaemon.grants = {"/usr/local/www" => "755"}
</font><font size="2">grantees.each{|g| g.apply_grants}
</font>
"But what if I'm a cock and manage to get something into the "grantees" collection that doesn't support granting?", I hear you ask.
2 approaches, basically:
<font size="2"># I don't care if we can't do it, just carry on as though nothing happened
</font><font size="2">grantees.each{|g| g.apply_grants rescue nil}</font>
<font size="2"># I do care, and want to log a message and a backtrace every time
grantees.each do |g|
begin
g.apply_grants
rescue
syslog << "Object #{g} doesn't implement :apply_grants but is in grantees"
syslog << $!.backtrace.join("\n")
end
end
</font>
And at that point, we're ready to do what we actually should have done in the first place, which is to pull the 'Grantee' functionality into some sort of role based granting. I'll leave it up to you to consider how that might be done, but it's not hard. And as long as we keep the interface to the Grantee module the same, we don't need to change any calling code. We can, of course, inject "Grantee" into _any_ class, including ones that aren't ours. Isn't that fun?
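For what it's worth, one hypothetical shape the role-based version might take -- a sketch only, assuming the same grant(object, dir, perm) helper as above, and not the only way to do it:

# Grants hang off Role objects; a grantee just carries roles
class Role
  attr_reader :grants
  def initialize(grants = {})
    @grants = grants
  end
end

module Grantee
  attr_accessor :roles
  def apply_grants
    (roles || []).each do |role|
      role.grants.each{|dir, perm| grant(self, dir, perm)}
    end
  end
end

admin = Role.new("/bin" => "777", "/sbin" => "777")
dba.roles = [admin, Role.new("/usr/local/pgsql" => "777")]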
Note how we've just gone from a piece of code that was liable to get posted here to something that is extensible, flexible and maintainable. All of a sudden we're no longer reliant on the types of the objects, merely on what they do. We have a much cleaner design. Our objects are encapsulated. Their state is inviolate. Their type is anonymous. All is well with the world.
Simon
[1] Although ActiveRecord doesn't have a 'generic' type; you must subclass for each table you want to deal with.
Admin
Fucking software.
<font size="2"> def apply_grantsJust after "We start getting sensible" should read:
<font size="2">class User</font><font size="2">
cattr_reader :grants
attr_accessor :grants</font><font style="font-family: Courier New;" size="2">
@@grants = {}</font>
self.class.grants.merge(self.grants).each{|dir, perm| grant(self, dir, perm)}
end</font><font style="font-family: Courier New;" size="2">
end
</font>
Piece of fucking shite.
Admin
Cute. Not only did you show that you indeed don't know too much about C++, you also misspelled "multiparadigm".
Admin
Actually, I think in terms of C#, as has been made clear. In C#, primitives are objects. What you say may be true, but the effect is the same: i is now of a different type.
I know that already. I'm trying to make you see where a language like C# has benefits over a dynamic language.
I take your point, although modern IDEs lessen this effect somewhat with refactoring tools. The issue is a case of having to sort out lots of compiler errors when you change the type of a much-used variable, vs lots of potentially hidden bugs when you misunderstand the code and use a variable in the wrong way. Not that that doesn't happen with C#; it just happens less, at least in my experience.
Ewww.. That's horribly bad practice, you know. Naughty, naughty tufty! :P
Now, you see, your solution goes a little off-topic. It uses modules. I love Ruby's way of handling modules; it beats C# hands down there. However, that's nothing to do with whether it's statically/dynamically or weakly/strongly typed.
Wow. I never knew such a f*cking moron existed. How do you manage to turn the computer on, let alone get to a website? I mean when did "hey, d00d, you misspelled that thing that is not at all like the thing you were trying to say, lol, u r teh suxx0r" become a cool thing to come out with?
C++ is in fact a procedural language with classes built on top. For f*ck's sake, it's only one step above "C with Classes". I'd suggest maybe you were one of those infinite monkeys taking a lunch break from his typewriter, but if that were true I'd have trouble imagining you turning out the complete works of Fisher-Price, never mind Shakespeare.
(See, you're not the only one who can act like a d!ckhead)
Admin
Ah, but therein lies the rub. Being able to inject methods into any class "after the fact", even one you don't have the source to, or to override existing methods whilst still keeping the original ones around for use, and all without having to fuck about with deriving new classes, is why, IM(NS)HO, dynamically typed languages kick the arse of statically typed ones. It was the latter two steps of my example that really show where the power is; the first bits were working up to why one might want to redesign the original example.
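A minimal sketch of that "override but keep the original around" trick, using plain method aliasing on a class we don't own:

class String
  alias_method :plain_upcase, :upcase   # keep the original reachable

  def upcase                            # illustrative override only
    "<<#{plain_upcase}>>"
  end
end

"abc".upcase         # => "<<ABC>>"
"abc".plain_upcase   # => "ABC"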
Basically, it went like this:
- Define an interface you want your objects to implement
- Extract the implementation of that interface into a standalone version
- Inject it into other classes
Which, of course, is not to say that the implementation of the interface has to be the same for all classes, although my somewhat simplistic version did that.
It's true that, for example, with C++, parts of this can be done using virtual inheritance. But if you're using a class library that you don't have the source to, you're SOL with that approach. As an example, back in the day I did a fair amount of work with RogueWave (a relatively common class library for C++ which was around way before the STL). It was painful to use. Objects that wanted to go into collections needed to inherit from Collectable; if you wanted to sort things they needed to inherit from Sortable, etc, etc. Now add other class libraries into the mix, and you end up defining a bunch of pointless classes that "glue" existing classes into some other hierarchy, and you spend vast amounts of time trying to make sure that nothing clashes with anything else.
The STL was a fair improvement on RogueWave, it has to be said, but then the pain occurs at the 'using code' level (and with all the various subclasses of STL containers you end up writing). Here's an example:
I want an array of hashes, where the key of the hash is a numeric and the value of the hash is a string. Then I want to extract all the values into another array
Ruby implementation:
array_of_hashes = [{1 => "foo", 3 => "wibble"}, {2 => "bar"}]
...
all_values = array_of_hashes.map{|h| h.values}.flatten.uniq
C++
std::vector<std::map<int, std::string> > vector_of_maps;
...
std::vector<std::string> all_values;
for (std::vector<std::map<int, std::string> >::iterator array_iter = vector_of_maps.begin(); array_iter != vector_of_maps.end(); ++array_iter) {
    for (std::map<int, std::string>::iterator map_iter = array_iter->begin(); map_iter != array_iter->end(); ++map_iter) {
        all_values.push_back(map_iter->second);
    }
}
std::sort(all_values.begin(), all_values.end());
all_values.erase(std::unique(all_values.begin(), all_values.end()), all_values.end());
Or something like that. Given that the C++ was typed directly into the editor window it's probably buggered, but you get the idea. So yeah, for C++ I would have to rely on the compiler to tell me where I'm wrong. Then, of course, there's the possibility that I might want to use some random class that has a string representation instead of actual strings in some case. Easy enough in Ruby..
all_values = array_of_hashes.map{|h| h.values.map{|v| v.to_str}}.flatten.uniq
Implementing the above for C++ is left as an exercise for the reader.
Simon
Admin
What's good about Fields?
Use as properties, for one. Setting a DataSource property on a control incorrectly? Throw an exception in the field setter... the exception occurs at the "control.property = value;" line (which is the line at fault) rather than at the "control.UsePropertyToDoSomething()" line.
What else? I think the term some people have been struggling to find is 'strongly-typed DataSets'. Want to know if a record has been modified? In the setter, set a modified flag.
Might even be used for optimisations!
e.g..
Basic circle maths
Method 1 - setting a circle object's Radius member to 10. Call GetArea(), GetCircumference(), GetDiameter()... Each call performs a calculation before returning.
Method 2 (with fields) - set a circle object's Radius field to 10. The setter precalculates the area, circumference and diameter, which can then be read from the Area, Circumference and Diameter members quickly and repeatedly.
(note: not an example of optimisation technique, more to open up possibilities for different design patterns using Fields)
In fact, Fields could prove useful for Method 1 too - change GetArea() to a field called Area, and have it calculate and return the area. It just makes things nicer!
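The "method 2" idea translated into a Ruby sketch (the C# version would do this in a property setter; the Circle class here is just an illustration):

class Circle
  attr_reader :radius, :area, :circumference, :diameter

  # the writer precomputes the derived values once,
  # so repeated reads are cheap
  def radius=(r)
    @radius        = r
    @diameter      = 2 * r
    @circumference = Math::PI * @diameter
    @area          = Math::PI * r * r
  end
end

c = Circle.new
c.radius = 10
c.area   # => 314.159...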
Admin
Of course you can; as I said, Smalltalk environments could do it more than 10 years ago. And IntelliJ/IDEA-level refactoring too, integrated into the IDE. There is no reason Ruby IDEs couldn't do it, other than that no one wants to tackle the issue. I never said it was easy -- it's clearly non-trivial. But it's definitely possible.
Hell, Eclipse was written by Smalltalk refugees to replicate features of Smalltalk IDEs, for god's sake; there is NOTHING in Eclipse that wasn't in Smalltalk IDEs, and there are allegedly many things missing. And Smalltalk was a completely dynamically typed language, as is Ruby.
Re-read Tufty's post, especially the end, the part about coercion.
The operation on the basis of which you deem Ruby 'weakly typed' is an explicitly defined type promotion from Fixnum to Float; it's not random typecasting.
If you don't believe me, fire up ri on Numeric#coerce.
And as ammoQ pointed out, 3.5 is not the same object as 1 or 2.5; you bound i to 3.5 instead of 1, but the objects themselves haven't changed.
As I said, nothing stops you from writing something along the lines of "my_admin_object.class == AdminUser" if you so wish. It's ugly and bad style, but you can do it. Behold (a quick irb-style sketch, assuming the AdminUser class from the earlier examples):
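my_admin_object = AdminUser.new
my_admin_object.class == AdminUser   # => true
my_admin_object.is_a?(AdminUser)     # => true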
Now why is it "bad style"? Because much of Ruby's, Smalltalk's, or Python's power comes from the so-called duck typing principle: if it walks like a duck and quacks like a duck, it's a duck. Think Java or C#'s interfaces, but without having to formalize them.
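A tiny illustration, with two made-up classes that share nothing but a method name:

class Duck
  def quack; "Quack!"; end
end

class Robot
  def quack; "Beep."; end   # no common superclass, no formal interface
end

[Duck.new, Robot.new].each{|d| puts d.quack}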
You're going to tell me that you could create two different classes that have methods named the same that do widely different things. Yeah, you can do that in C# too: have two classes implementing the same interface that do widely different things. But you wouldn't do that in C#, because it'd be stupid, would you? Well, you don't do that in Ruby either.
Hell, even OCaml, which is frigging strongly typed (and statically typed, much like Haskell), implements a form of duck typing: structural subtyping. The type inference system creates type compatibility between two types sharing methods with exactly the same signature, regardless of the inheritance hierarchy.
Interfaces define formal, ink-and-paper contracts; duck typing is based on something we could call a protocol: a much more casual, laid-back, and not compiler-enforced kind of interface.
That's how Ruby and Python mostly work today, and that's how Smalltalk has worked for 25 years or so (the last major version of Smalltalk was Smalltalk-80, which was released in... 1980... and accepted as an ANSI standard in 1998).
Admin
Never play soccer with fans of dynamic typing. Those guys don't care about the difference between shooting a ball and shooting a gun [:S]
Admin
Heh.
Personally I'd kick a ball, and fire a gun.
I'd happily shoot Bjarne Stroustrup, though.
Simon
Admin
Ah, nice. Only, if you're trying to dodge my post unnoticed, you shouldn't pick insults that lame.
Tell me: is your leetspeak mockery merely a misrepresentation, or could you really not catch my drift? My wording might not have been the most unambiguously polite one, but I didn't post what you're delineating here.
Care to elaborate on that one? Sources for your knowledge? Above all, how many steps, and which ones, 'above the "C with Classes"' are required to escape your ridicule?
Lol, really. That's so mind-bogglingly stupid an insult. And so perfectly unrelated to everything, too.
Also, you seem to have some trouble with your understanding of the concept of infinity. (What the heck is an 'infinite monkey' anyway?)
I very much preferred Lego, btw.
My most sincere apologies if I come across like that.
Though now it's your turn to show that you were indeed only acting.
You wholeheartedly bashed something of which you have, by your own admission, not too much knowledge.
Again, please tell me: what is it that makes C++ a procedural language (with classes on top); what would be required to make it suitable for object-oriented programming; and what makes C# (I assume that's your point of view) object-oriented?
Furthermore, I'd like you to clarify a thing or two you mentioned in the post I originally replied to:
"[Ruby, C#] manage memory" - what do you mean here, exactly? GC?
"[Ruby, C#] manage ... scope" - eh?
"I even have to tell [C++] how to create and destroy those classes" - I'm really not sure what you're refering to here. (As a sidenote, I rather like the possibility of defining a destructor opposed to the convention of dispose()-methods.)
Please enlighten me.
the_infinite_monkey
Admin
It should read, of course, "too many knowledge of".
Admin
Why? Can't take a joke?
Admin
Sorry, but again, I don't have time to answer all the points. I'll try and get round to them later.
tufty:
If you'd started talking about the way that Ruby's class definitions are, in fact, executable code, then I'd be right with you. But as far as I can see, you weren't.
See, C# could have had something like mixins, but MS missed the boat there. C++ had multiple inheritance; with C#, they decided that it caused more trouble than it was worth. But then with Ruby they came up with a nice compromise: you (unless I'm mistaken) inherit from one class, but you can include other blocks of code.
Now, the injection of class members is really just a case of partial class definitions, which, again, could in theory be done in a strongly, statically typed language. There'd be rules as to which classes could inject what into your class, I guess.
masklinn:
Sorry, I don't have any experience of Smalltalk. I'd be interested in hearing how it can evaluate types at design time for a language where types are evaluated at runtime. I suppose it could trace a variable back to its creation. But consider my earlier example, in Ruby:
i = 1
print i
i = i + 1.4
print i
Now, if I hover my mouse cursor over the first print i statement, would I see "Integer"? And if I hover it over the second, similar, statement, would I see "Float"?
And infinite_monkey, wtf are you on about? I really have no special interest in what you are saying, because you seem to have no special interest in making sense. Good day to you.
Admin
It's a reference to the "infinite monkeys with infinite typewriters" theory. Look it up on Wikipedia. And the sentence does make sense. An 'infinite monkey' doesn't make sense, but 'one of those infinite monkeys' does. See, you can say 'one of those x things'. 'One of those 5 apples'. 'One of those 10 cars'. 'One of those infinite monkeys'.
However, the word 'infinite' was really there so you knew which monkeys I was referring to, rather than as a quantitative clarification.
No I did not. I said it was a procedural language with classes built on top. That's not bashing it, unless all procedural languages are crap. Which they are not. It's simply an observation as to the prevalent way of working.
I dunno, maybe I'm just being unreasonably pedantic, but in C#, Ruby, and Java (I think), you are working with classes right from the word go. With C++ it all starts in much the same way as a C application, and then you actually have to *invoke* the part of C++ that knows about classes. In C++, for example, the application itself is not an object. Lots of datatypes in C++ don't have members you can work with. None of these alone kills its OO implementation, and even together they don't kill it. But IMHO it just has too many holdovers from procedural languages to be truly OO. That's not a bad thing, particularly; it just means I don't really want to work with it.
Well, in a sense these are all related to the same thing. Have you ever used C# (or something similar) before? If not, then you may not know about its garbage collector. Essentially, in .NET and Java, and Delphi to a lesser extent, most objects will be cleaned up automatically when they go out of scope. There are exceptions, which is why the dispose() method exists.
dispose() vs destructors? A stylistic choice, really. The benefit in OO languages that support garbage collection is that you usually needn't worry too much about releasing memory. Memory leaks are much less of an issue (though .NET itself has a few leaks, I think).
Admin
AFAIK, at least in Java, objects are not immediately cleaned up when they go out of scope. It's rather unpredictable when the objects will be cleaned up. The advantage of garbage collection over reference counting is that unreachable objects with circular references will be cleaned up as well.
Admin
More or less yeah, you'd probably see "Fixnum" here but the spirit's the same.
Likewise, automated refactoring was first integrated into Smalltalk environments (the Refactoring Browser, more than 15 years ago) with the stuff you see now in Eclipse or IntelliJ: rename whatever you want (method, class, instance), extract method/class, inline method, move method, add/remove parameter, etc...
The way this was done is that Smalltalk environments were (and still are) running, "live" Smalltalk runtimes: when you write a method, it is interpreted (or reinterpreted if you're modifying it) on the fly, and both you and the environment have the full power of Smalltalk introspection and reflection, which means it knows, or can know, the type of more or less every object.
Much like what Eclipse does now (remember that Eclipse was started by ex-smalltalkers), but better, more refined, and executed on the fly as a permanent (continuous) process instead of a one-at-a-time discrete run.
The core of it all is that dynamically typed languages require the code to live, be interpreted, be executed, be observed from the environment itself (which therefore has to be written in the language), and that it's not easy to do. This is one of the reasons why there aren't many advanced IDEs for modern dynamically typed languages. That, and the fact that most people use interactive interpreters (often embedded in the editors/IDEs) to manually do part of what Smalltalk environments do automagically.
Reference counting is a garbage collection scheme (and one that can be made to handle circular references) though...
Admin
OK, that is actually true. Kinda. It doesn't usually happen immediately, because the GC has to actually come round to that area of memory. You can explicitly call it, but that kind of defeats the point.
I've known people to have all kinds of issues with ref counting, though my experience is of Windows Installer components not being removed at the correct time. Not sure about Java, but in C# any object which does not have a valid reference pointing to it is considered out of scope and cleaned up. There are exceptions, as I said, and these can be created and destroyed explicitly.
Admin
That post was to ammoQ. masklinn posted at the same time.
Interesting....
I'd be interested in seeing how it does that if Smalltalk is dynamically typed. Like I said, one way would be to trace every variable back to its creation, checking conversions along the way. However, I'd imagine that could be fairly slow, and possibly unreliable: it may be impossible to predict the type at design time, since decisions about what type is being used might happen at runtime, and that's more common with dynamically typed languages.
Admin
Well, I knew what you referred to, but _you_ should look up 'infinite'; it might not mean exactly what you think. But yes, I'm being rather pedantic here, and it doesn't matter anyway.
Yes, you're right with the first sentence; it just sounded like you did, and I jumped the gun. However, the main point is still valid: it's not a procedural language, it's a multiparadigm language. It provides the means to design object-oriented programs, period. I don't see the possibility of applying procedural techniques as a drawback, rather the contrary - but then I'm no OO dogmatist.
Now that's rather irrelevant, even if it were true. And if you're really too embarrassed to use a free function (as opposed to sensible constructs like e.g. java.lang.Math :rolleyes:) then you can still try wrapping it in a class and hide it somewhere, hoping nobody notices. That way you can even say, more or less, that your 'application itself is an object' - whatever that's supposed to mean.
I'm not too sure what you mean here. You miss autoboxing for PODs? Or does it have to be the Ruby way, true objects?
So these points boil down to: C++ doesn't have a GC, and nothing else. Well, C++ (or even C, for that matter) isn't exactly incompatible with garbage collection - try Google.
As for the "create and destroy" - it's the same with other languages, apart from being able to add code that gets executed upon destruction, isn't it?
(Yes, I've worked with GCed languages, and don't even say it's an inherently bad thing.)
Well, here you're rather wrong. It's not about memory leaks but resource leaks; and I have to say I prefer RAII over awkward constructs. (Apart from the fact that the destructor can be a nice place for logic, too.)
But there's yet another question you didn't answer: what makes a language object-oriented?
the_infinite_monkey
p.s. Just read your other post. So don't feel pressed to answer if you think I'm just making baseless assertions... oh, and good day to you, too ;-)
Admin
I got lost in this argument, so it is a good place to ask some background questions. I haven't kept up on OO dogma since 1990 or so, and I was wondering if somebody with more current terminology could explain:
Admin
I know what it means. Yes you are being pedantic, betraying a misunderstanding of the nature of the English language.
Well, name a reasonably modern language that is not 'multiparadigm'. Even C had a very limited kind of OO functionality in its structs, and even C# has procedural concepts. C++ sits on the fence, perhaps, but the fact remains that its basis is C.
I like the Ruby way (or C#/.NET) - It's not absolutely a requirement for a language, but I just like it.
Clearly you misunderstood what I meant. In C# and, I assume, Java, you can get certain information on the application and process that is running your code as members of an object that represents your application, such as Application.ProcessId and Application.ExeName (from memory - I don't doubt that I got the names wrong, but hopefully you get the idea).
In itself it's not a big thing, but it's just an example of the things you can do when you take OO far enough.
More pedantry. Memory is a type of resource. It's the one most prone to leaks. Ergo, memory leaks.
Besides, the GC really deals mostly with memory leaks.
Maybe you haven't read my posts properly
Admin
It isn't enterprisey - it's ajaxy. And there's a whole new wave of this stuff coming our way, since it's the top dog of buzzwords at the moment.
Admin
Care to explain? You're talking about non-standard English? Or can't you just get your head around the meaning of the word 'adjective'?
Very practical, movable goalposts.
Well. I'm not sure whether you got my argument and chose to ignore it or not. And I'm still rather undecided which option makes you look sillier.
Maybe you can't comprehend mine.
Admin
Nearly forgot to ask yet again:
What makes a language object-oriented?
(No need to break your habit of not merely dodging but completely ignoring this one. I have great confidence in your inability to answer it, but it's fun to ask anyway.)
Admin
No idea; maybe OO "outgrew" procedural programming, became a programming scheme by itself, and rose to an equal footing with PP, therefore promoting its languages to a class equal to and separate from procedural ones?
I don't see why not. Your intent is probably less clear but OO is more of a method and philosophy.
I sure hope not, and some OO languages are considered "multi-paradigmatic" in the sense that they don't try to force OO style down your throat.
Java and C# are mono-paradigmatic (they try to force you to use OO, and fail if you really don't want to), Ruby or Python are multi-paradigmatic (they have quite extensive OO capacities, but they won't stop you from using extremely procedural or functional styles).
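For instance, both of these are perfectly legal Ruby -- a trivial sketch of the procedural and functional styles it tolerates:

# Procedural: top-level methods, no classes in sight
def greet(name)
  puts "Hello, #{name}"
end
greet("world")

# Functional-ish: first-class blocks and lambdas
double = lambda{|x| x * 2}
[1, 2, 3].map(&double)   # => [2, 4, 6]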
Having an OO language is merely having the potential to easily create OO programs, it doesn't mean you can't fuck it up.
I must say that I have no idea, but to me the baseline of OO is not any form of inheritance, it's polymorphism. Without polymorphism you lose many OO constructs; inheritance is merely one way to implement polymorphism and add code reuse to it.
Admin
A language is OO if lexical scopes are usable outside the function that created them. Stop posting in this thread, the_infinite_monkey.
Admin
It's not a relevant word - maybe it's you who can't get your head round it.
You said something I said didn't make sense, when it did.
Maybe you're just one of those people who, lacking actual points to make, tries to pick holes in somebody else's English. But that falls apart when that person speaks perfectly good English, so what do you do then? Pretend they don't? Pretend that the language has absolutely rigid rules which cover all its forms? Apparently, that's exactly what you do, which is why I called you a d!ckhead - and you seem intent on justifying that description of you.
If it's the latter, don't feel bad - it's better than the alternative. Maybe you're not a native English speaker, in which case your English is very good, but I wouldn't go around accusing native English speakers of having bad English if I were you.
Or perhaps you're from the USA, and that's not a slur against the USA, it's an effect of the fact that we speak (almost) the same language. North Americans have often been accused of not being aware of the world outside their borders. There's probably some truth in that, and if you are American, you should remember that there are other countries in the world that speak English, not least England. They all have their own mannerisms and customs and ways of speaking. And you should also remember that on the internet there's no way to tell where somebody is from.
Of course, there's still the possibility that you're just a pillock.
Don't even begin to think that just because somebody speaks slightly differently from you they are somehow deficient. You'll get nowhere in life if you take that attitude.
Admin
I have done something similar, and by that I mean there was a sane reason I did it.
I wrote a program in C that would connect to Oracle using OCI and hard-coded the username and password in the source.
But there's a program in Unix that will strip out and display the C strings inside an executable. So if someone runs it (I forget the name, I haven't touched Unix in 4 years, I'll call it strings) on the executable, presto: username and password for the database account.
So to get around it, I built up the username/password strings using single chars. When someone runs strings on the executable now, they only get single characters which are lost in the background of other crap the strings program produces.
Admin
There are similar apps for Windows - I think ResHacker will drag strings out of a binary. I know that Process Explorer from www.sysinternals.com will do it. I've never used that feature, I just happen to know it's there.
Admin
strings is the name of the program
emphasis added ;-)
Anyway, it depends on the environment. Any chance an evil hacker can get the executable? If yes, how much damage could he do anyway?
Admin
Whoops... this was supposed to get added to the Functional Encryption thread... Not sure what I did to make that happen.
Admin
Ok, if you say so I will.
the_infinite_monkey
Admin
Your writings betray you as a liar, unless you have someone proofread and correct your posts before submitting. I might be wrong, though, I must admit: maybe the metaphor of infinitely many monkeys typing away describes you quite aptly, and your sentences, on the whole, agree only accidentally with English rules of grammar.
I grokked what you meant, sure enough; it doesn't necessarily follow from that that you were making sense. And in fact you weren't. Possibly, considering the 'infinite typewriter' a reasonable contraction, you were just wrong.
Wow. Did you hear that? My irony-meter just exploded.
Re-read our posts, and even you might see that it's you who's always responded with insults in lieu of rebutting my points or admitting you're wrong or not as informed as you thought. I concede that you made some vacuous statements and simply ignored a few of my points, though.
So you're a psychic? Or do you refer to my first post, wherein I used, tongue-in-cheek, the word 'misspelled'? In that case I can only hope you're deliberately obtuse.
Being a native speaker or not has relatively little relevance when examining the command of language of specific individuals. Even more so if it's about written language: my perception might be wrong, but it seems that, e.g., the curiosity of utter confusion as to when to use "its" vs "it's", or "there" vs "they're" vs "their", etc., is mostly found among native speakers.
And btw, I never said that your English is bad - but I'd see no problem with pointing it out if it was.
Yes, there's that possibility from your vantage point; just like there's the possibility that you're not unnecessarily worked up and unable to defend your position but merely a crude dimwit.
So it seems you're indeed transcendentally gifted, eh? Remote mind-reading, very nifty.
the_infinite_monkey
p.s. Sorry Johnny H., won't do it again if I can help it...
Admin
I won't bother responding to the rest of the meaningless drivel, but...
But maybe that's the problem - you can't help it. You can't help being a pillock (I've concluded from your last post that you are just being a pillock).
Admin
Oh, and yeah I was referring to your first post. Your 'tongue-in-cheek' remark was a stupid thing to say that no worthwhile human being would come out with.
Admin
The last part of the thread reminds me of my kids. They are 3 and 7. When they quarrel, they are about as childish.
Admin
lol