The Is-Ought Problem, Theistic Morality, and If-Ought Moral Rationality

Sucumbio

@AfungusAmongus would appreciate this thread; I wish I'd read it sooner. The OP's explanation of the is-ought problem is not only well put, but it completely explains why facts cannot lead to should-dos. My water example is perhaps null, and I see why. We do drink water, fact, but it's from a subjective desire to not die of thirst.

Given this, however, I'm led to question the importance of Hume's assumption. Why does it matter? If we cannot morally conclude it is wrong to kill another human based on scientific studies of animals that avoid killing their own kind, or, as in the previous topic, on the argument that altruism in nature means altruism in humans is best, then is the OP right that we should tax ourselves morally by starting with the desired end product? Is this a slippery slope?
 

Sehnsucht

You Ought Not necrobump, @Sucumbio. :awesome:

My current moral hypothesis has come to conclusions similar to those outlined in the OP. Facts alone have zero moral value; it is only when considered in relation to an agent's desires that facts gain some moral substance.

But by that alone, you can't separate "right" action from "wrong" action within such a framework. If you want to murder someone, then whether you go through with it is contingent on the facts at hand -- your desires, your values, and taking into consideration the consequences and repercussions of your desire for homicide, if acted upon. If you want to grow sweet potatoes and hand them out to people, then whether you go through with it is contingent on the facts at hand -- your desires, your values, and taking into consideration the consequences and repercussions of your desire to grow and distribute sweet potatoes, if acted upon.

It's an If-Then formula. IF you desire X, THEN you should proceed with course of action Y. "Oughts" are contingent creatures.
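To make the shape of that explicit (just my own shorthand for the If-Then formula above, not anything lifted from Hume or the OP):

$$\big(\text{Desire}(A,\,X) \;\wedge\; \text{BringsAbout}(Y,\,X)\big) \;\Longrightarrow\; \text{Ought}(A,\,Y)$$

Strike the antecedent $\text{Desire}(A,\,X)$ and no $\text{Ought}(A,\,Y)$ follows from the remaining facts alone, which is the Is-Ought gap in one line.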

The OP also touches on what I've come to think concerning theism. That God exists tells me nothing about whether I ought to worship God, or ought to pursue Good actions, or ought not pursue Evil actions, etc. etc. In the end, it all comes back to the agent's desires, and what outcomes they wish to bring about for themselves and/or others.

The OP calls their If-Ought system "Moral Rationality", though I think it's basically Consequentialism, broadly. Moral substance is centered not on the character of the agent (because morality concerns actions), or in the moral quality of an act in itself (because actions don't exist in a vacuum), but in relation to an act's consequences, and which of these repercussions are desirable or undesirable for the agent(s) involved.

Anyway, if I understand your query, you're asking how one might derive a moral system (i.e. rights and wrongs) from facts of human biological and cultural evolution, as well as from factors of physiology, psychology, and so on. That humans, as social mammals, tend toward pro-social behaviours tells us nothing about whether we ought to behave in pro-social ways. Is this the question you're posing to the floor, Sucumbio?

If so, then I think the OP lays it out pretty adequately. Due to the Is-Ought problem, facts of biology and sociology have in themselves zero moral substance. But I'd say that we can derive Rights and Wrongs by examining the common denominators of the human experience. If the majority have the same basic desires, then we can collectively agree on which behaviours and actions to pursue. And we can recognize that if we want to fulfill our desires, and everyone has common desires, then working together for the mutual fulfillment of our desires will be more advantageous and efficient than trying to go at it solo, each on our own.

So, if it is the case that most humans are inclined to want to survive as long as they can, then we can take all acts and behaviours that satisfy that inclination and classify them as "right" actions, and take all acts and behaviours that impede that inclination and classify them as "wrong" actions. IF I want to live for as long as I can, and I know most others want to live as long as they can, and that the chances of accomplishing my goals increase with mutual cooperation, THEN I should share my sweet potatoes (a good action), and should not go on a killing spree (a bad action).

This is not an objective morality; it doesn't extend beyond the human experience. But it works with the grain of the human experience, and not against it, and its application and results can be objectively measured. Most people want to live, are capable of empathy, don't want to be harmed, etc. If we're going to construct a set of moral guidelines, let's work with tangible and demonstrable facts that are relevant to the experience of living, and consist of the common base denominators of our (collective) agency.

But at this point, I'm just recapping the gist of the OP's thesis. I suspect that I may be misreading your question, so I invite you to show me the rod and school me without mercy.
 

AfungusAmongus

Why it matters: the Is/Ought principle identifies bad arguments. For example, attempts to make any non-trivial system of morality "universal/objective" (i.e. that gives reasons to everyone) are doomed to rely either upon moral premises (losing objectivity), or else upon false inferences from non-moral premises.

I don't know what you mean by "tax ourselves morally" or "starting with the desired end product". I can take this in any of several ways (forming/examining moral codes, consequentialism, desire-utilitarianism, ...). Could you elaborate? Thanks for the thought btw :)
 

Sucumbio

Yes and yes. And yes. To both of you? Hm... Basically the OP proves why the is-ought problem is correct, and offers consequentialism as an alternative. I'm okay with this. My issue was this... Why does it matter that our subjective desires be used as the basis of a moral truth, when that particular desire is for all intents and purposes unavoidable?

We drink water to live. It may be that Hume says we drink water out of a subjective rationale to not die, but I question the truth of this. Can you think of someone who would choose to die of thirst, everything else being normal?
 

Sehnsucht

Sucumbio said:
Yes and yes. And yes. To both of you? Hm... Basically the OP proves why the is-ought problem is correct, and offers consequentialism as an alternative. I'm okay with this. My issue was this... Why does it matter that our subjective desires be used as the basis of a moral truth, when that particular desire is for all intents and purposes unavoidable?
To base a moral framework on one's desires may be deemed arbitrary -- in that, lacking an objective referent or standard, you're free to choose anything to serve as the basis of your framework -- but I'd say that if you're going to construct an ethical standard from the ground up, you may as well choose something fundamental, pervasive, measurable, and relevant to the human experience and human decision-making process.

>Morality concerns one's actions, what we should or should not do in a given situation;
>Agents, being aware of their choices, can consciously direct their actions;
>Humans classify as agents;
>The impetus of all action in humans is desires and/or values (what you want and/or what you care about);
>The actualization of desires and/or values is usually contingent on the variables at play (the current variables and projected repercussions of one's choice);
>Desires and/or values that are not acted upon in any way exist in a vacuum, and so lack moral substance;
>Desires and/or values acquire moral substance when considered alongside the potential or actual consequences they have upon others;
>Consequences are that which possess moral substance, and are subject to moral consideration and scrutiny.

Moral inquiry seems to always lead back to the agent as the center of morality (they being that which drives moral phenomena). And human agents only ever do things because of desires, values (and, to an extent, subconscious cues like instincts)***.

***Desires and values are rooted in baser principles, like instincts and biology, which were further molded by evolution. The Pain-Pleasure aversion-reward preference binary is at the root of emotion, and therefore of desire. But this preference binary is expressed in the conscious mind as desires, so we can work with them and set them as the basis of moral examination.


Sucumbio said:
We drink water to live. It may be that Hume says we drink water out of a subjective rationale to not die, but I question the truth of this. Can you think of someone who would choose to die of thirst, everything else being normal?
If a person wishes to fatally dehydrate themselves, it stands that it's by extension a wish to die, or kill themselves.

Most people are inclined to want to survive. I say inclination, because there are factors that can lead someone to want to die instead of live. An example of this is depression, which can lead to suicidal thoughts.

But such people's brains are being affected neurochemically, so they aren't in a grounded, default state of being. Would a person in a sound, rational state of mind (by all definitions) ever come to desire suicide? It's conceivable. Perhaps they've rationally come to the conclusion of antinatalism, and so kill themselves for what they perceive as a justified philosophical reason. Perhaps they're in an objectively disadvantageous situation (e.g. trapped somewhere), and suicide is deemed the more desirable alternative to prolonged suffering and danger. And so on.

As for electing to die of thirst, it depends on the situation. Maybe the person is tired of trying to survive in an environment where water is scarce and a ***** to find, and so comes to deem death preferable. Maybe the person will forgo hydration for the sake of others, so that they can have more water (because the agent values their lives over their own).

I don't think that a person sitting on their couch will suddenly want to dehydrate themselves to death. If most people are, in a sound state of being, inclined to want to survive for as long as possible, then a person will be much less inclined to want to kill themselves on a whim, or out of boredom, or so on.

I imagine that such a scenario (voluntary death by dehydration) doesn't happen very often, if at all. But even if it's rare, it doesn't undermine the primacy of agents, desires, values, and consequences as the basis of our consequentialist system.
 

AfungusAmongus

Sucumbio said:
Yes and yes. And yes. To both of you? Hm... Basically the OP proves why the is-ought problem is correct, and offers consequentialism as an alternative. I'm okay with this. My issue was this... Why does it matter that our subjective desires be used as the basis of a moral truth, when that particular desire is for all intents and purposes unavoidable?
Careful! You're referencing a few different theories. Consequentialism defines the moral worth of actions in terms of their results; subjective moral truth characterizes moral relativism. Meanwhile your question attacks desire-utilitarianism (a consequentialist moral theory that defines the moral worth of actions in terms of desire-fulfillment), and I agree that the theory doesn't seem to matter. It sneaks universal morality past Hume at the cost of being trivial. You've said nothing against consequentialism or moral relativism.

Sucumbio said:
We drink water to live. It may be that Hume says we drink water out of a subjective rationale to not die, but I question the truth of this. Can you think of someone who would choose to die of thirst, everything else being normal?
We (usually) drink water because we're thirsty (and we want to drink when we're thirsty), and because we know it's good for us (and we want to do things that are good for us). Everything else being normal, nobody would choose to die of thirst. We don't want a philosophy that applies only when things are normal. Moral psychology should coherently explain hunger strikes and suicides, as well as drinking water. Hume does this in terms of competing passions (desires). Hunger striking is thus explained in terms of desires such as Gandhi's wish for political reform. Starving yourself is rare, but then so is overcoming your survival instincts.
 

Sucumbio

AfungusAmongus said:
Careful! You're referencing a few different theories. Consequentialism defines the moral worth of actions in terms of their results; subjective moral truth characterizes moral relativism. Meanwhile your question attacks desire-utilitarianism (a consequentialist moral theory that defines the moral worth of actions in terms of desire-fulfillment), and I agree that the theory doesn't seem to matter. It sneaks universal morality past Hume at the cost of being trivial. You've said nothing against consequentialism or moral relativism.
Apologies for not being clear... basically his statement (mostly in bold)...

Now, so far we have established that you cannot derive an Ought from an Is, but there does appear to be a form in which you can derive an ought, that is an Ought from an If. That is, if you want a certain outcome, then it objectively follows that you ought to perform actions to actualize that outcome.
...tells me that one should value the outcome of an action above all else.

AfungusAmongus said:
We (usually) drink water because we're thirsty (and we want to drink when we're thirsty), and because we know it's good for us (and we want to do things that are good for us). Everything else being normal, nobody would choose to die of thirst. We don't want a philosophy that applies only when things are normal. Moral psychology should coherently explain hunger strikes and suicides, as well as drinking water. Hume does this in terms of competing passions (desires). Hunger striking is thus explained in terms of desires such as Gandhi's wish for political reform. Starving yourself is rare, but then so is overcoming your survival instincts.
Hume makes it impossible for moral truth to be derived from physical evidence, because he defines humans as having a mind-barrier of "wants" that presuppose -everything-. Always. No matter what people do, they're only doing it because of a desire that could technically change on the fly; there is no room in his model for so-called "involuntary" brain activity. I would like to think that science has indeed proven there are instincts that the brain "thinks" all on its own and uncontrollably. And that, barring brain damage, these things are universally the same between all humans. And therefore these things can be used as the basis for a moral framework. But I don't know whether that could be done...

I also have an issue with deciding on a moral ToE. To me it seems humans change and develop so much over time that only very few moral questions will be answered the same throughout someone's life. Perhaps this means that only a select few moral questions can be ascertained within an all-encompassing moral framework, and that to answer a broader spectrum of questions, one would have to adopt more than one theory, or indeed change theories once or twice or more throughout their lives.

Just as a quick example: a moral question whose outcome boils down to sacrifice yourself or sacrifice others. To a 20 year old, the question seems to be best answered: F you, you die, I've got my whole life to live. Whereas an 80 year old who's lived their life may not be so quick to feel this way. In fact one could say a person's life experience may make them better equipped to even answer moral questions.

Sehnsucht said:
...(and, to an extent, subconscious cues like instincts)
I had this idea too, but it seems to get shot down by Hume. I know you said "to an extent" but to what extent? I think it has to either be in or out, and I think if it's in, then is not objective truth back on the table?
 

LarsINTJ

Is it not getting an "ought" from an "is" to claim that you cannot get an "ought" from an "is"?

As I've stated before, ethics should define evil, what we "ought" not to do if we are to be virtuous. Such guidelines are necessary for virtue, but not sufficient. I cannot automatically call myself a good person after pondering the fact that I did not strangle a homeless man today.

A good person interacts honestly with others while upholding universally consistent values.

(Ignoring obvious violations of the NAP) Murder, theft, r.ape: any moral code which condones these actions as "the good" is inherently invalid. Why?
- The simultaneous affirmation and denial of property rights.
- They all require a lack of consent; if everyone accepts these actions as good then they become consensual and thus contradict their own definitions.
- Positive action can never be considered ethical because it fails universality, i.e. those who are restricted to inaction for whatever reason are condemned. Furthermore, everyone must be logically condemned for any idle moment spent failing to fulfill "the good".

An ethical framework should allow all people to avoid evil at all times, i.e. universally valid.

Remember, evil is simply defined as that which can never be considered good under any circumstance. This basically amounts to initiations of force against the axiomatic right of self-ownership and its external implications.

Since we own our bodies, we also own the consequences of our actions.

Yet ethics are optional. The many human predators among us will not be turned virtuous through recommendation. Nonetheless, evil is only able to operate for as long as it remains covert; to identify evil is to destroy evil. It is crucial for us to abandon our relativistic amoral zeitgeist so that we stop enabling predators and ostracize them.
 

Sehnsucht

Sucumbio said:
I had this idea too, but it seems to get shot down by Hume. I know you said "to an extent" but to what extent? I think it has to either be in or out, and I think if it's in, then is not objective truth back on the table?
I'd imagine that things like instinct, and psychological quirks and biases that you may not be consciously aware of, could be measured and quantified, such that you can see just how much of an impact they have in the formulation of a given decision. I have no numbers at hand, not being quite that adept in biology and psychology, which is why I was vague. Without knowing any stats, we can acknowledge that such non-conscious factors do impact our reasoning, reactions, and other elements integral to the moral equation.

Yet we have the experience of making choices and decisions. That's the important part, since it's the agency part of the equation, and thus the only part worthy of moral consideration. Non-sapient animals can't reflect on their own choices; they act on instinct. Can we ascribe moral consideration to the actions of non-sapient creatures? Can I similarly be held morally accountable for elements of psychology or biology of which I have zero conscious awareness? Sounds just as productive as classifying the moral quality of varying eye colours.

I'm not sure I follow your question in the second part. That "carbon-based organisms require water as their prime chemical solvent" is objectively true, but doesn't inform me, an agent who has the experience of self-reflection, whether I should or should not drink water when I'm thirsty. I have the inclination to want to drink something when I'm thirsty due to physiological demands, but I can deny those urges for whatever reason(s) I choose, no matter how sound or frivolous.

This probably has nothing to do with what you're saying at all. I don't understand what you're trying to get at, even after reading your post in whole. Pls explain. 8(

LarsINTJ said:
As I've stated before, ethics should define evil, what we "ought" not to do if we are to be virtuous. Such guidelines are necessary for virtue, but not sufficient. I cannot automatically call myself a good person after pondering the fact that I did not strangle a homeless man today.
IF I want to be virtuous, THEN I should define that which is evil, so that I might avoid evil acts and behaviours. Is this the way of it?

If I don't want to be virtuous, then I wouldn't be inclined to avoid evil acts and behaviours. I might even actively pursue them, if it suited my purposes.

So why should a person seek to be virtuous?

And also, what is the definition of virtuous, in this case?

Note that I'm not necessarily disagreeing with your position, here. Just looking to get a better grasp on the nuts and bolts of the affair (something I'll continue to do down below).

LarsINTJ said:
A good person interacts honestly with others while upholding universally consistent values.

(Ignoring obvious violations of the NAP) Murder, theft, r.ape: any moral code which condones these actions as "the good" is inherently invalid. Why?
- The simultaneous affirmation and denial of property rights.
- They all require a lack of consent; if everyone accepts these actions as good then they become consensual and thus contradict their own definitions.
- Positive action can never be considered ethical because it fails universality, i.e. those who are restricted to inaction for whatever reason are condemned. Furthermore, everyone must be logically condemned for any idle moment spent failing to fulfill "the good".

An ethical framework should allow all people to avoid evil at all times, i.e. universally valid.
More questions of clarification:

-So morality is, or should be, about consistency in actions and behaviours across any and all situations?

-What is the NAP?

-By property rights, do we mean that a person is entitled to having something, whether it be themselves, their privacy, their autonomy, their acquired goods and possessions, and/or so forth? In what way is a person entitled to any or all of these things?

-Why should I choose to respect the notion of consent, over not respecting it? To do things with the consent of others versus violating a person's consent? In what way is the former preferable to the latter, or vice-versa?

LarsINTJ said:
Remember, evil is simply defined as that which can never be considered good under any circumstance. This basically amounts to initiations of force against the axiomatic right of self-ownership and its external implications.

Since we own our bodies, we also own the consequences of our actions.

Yet ethics are optional. The many human predators among us will not be turned virtuous through recommendation. Nonetheless, evil is only able to operate for as long as it remains covert; to identify evil is to destroy evil. It is crucial for us to abandon our relativistic amoral zeitgeist so that we stop enabling predators and ostracize them.
The Questions Reloaded:

-How is self-ownership axiomatic? And what is self-ownership? Do I "own" myself? What aspects of myself do I own?

-Why is it crucial that we cease to enable "predators" (those that commit or perpetuate evil)? Why is it crucial to abandon our "relativistic amoral zeitgeist"?

It seems you've derived an ethical framework from axioms and definitional ramifications. A rationalist model, if you will, as opposed to an empirical one like the consequentialist models myself and the others above have been whittling away at ("empirical" in that it uses factors of human experience and biology as the basis, as opposed to logical inferences).

Interesting stuff, in such a case. And again, I'm not necessarily opposing your model; I'll only be able to do that (should I choose) once I'm sure I've understood the underpinnings of the model. 8D
 

AfungusAmongus

Sucumbio said:
I also have an issue with deciding on a moral ToE. To me it seems humans change and develop so much over time that only very few moral questions will be answered the same throughout someone's life. Perhaps this means that only a select few moral questions can be ascertained within an all-encompassing moral framework, and that to answer a broader spectrum of questions, one would have to adopt more than one theory, or indeed change theories once or twice or more throughout their lives.

Just as a quick example: a moral question whose outcome boils down to sacrifice yourself or sacrifice others. To a 20 year old, the question seems to be best answered: F you, you die, I've got my whole life to live. Whereas an 80 year old who's lived their life may not be so quick to feel this way. In fact one could say a person's life experience may make them better equipped to even answer moral questions.
Just because you (should) act differently doesn't mean that your moral theory is different. A moral theory should apply to many circumstances, including various stages in your life. In your example, a consequential analysis of giving your life depends on two things: your own value, and the expected value gained from your sacrifice. All people have value, but (as any life insurance accountant can tell you) we're not all equally valuable. Moral value is different from financial worth, but you get my point.
 

Sucumbio

Sehnsucht said:
I'm not sure I follow your question in the second part. That "carbon-based organisms require water as their prime chemical solvent" is objectively true, but doesn't inform me, an agent who has the experience of self-reflection, whether I should or should not drink water when I'm thirsty. I have the inclination to want to drink something when I'm thirsty due to physiological demands, but I can deny those urges for whatever reason(s) I choose, no matter how sound or frivolous.

This probably has nothing to do with what you're saying at all. I don't understand what you're trying to get at, even after reading your post in whole. Pls explain. 8(
Well, this is kind of where I said "everything else being normal," but that was unclear in and of itself, so let's say it another way.

Humans need water to live. They need clothing and shelter to survive in harsh climates. They need to procreate in order for the species to continue, etc. etc. These are all factual statements. However, we cannot say that these things are "good" things to do, because in each instance the facts are themselves not prescriptive. They only exist as statements, empty of true meaning. Saying "we want to live, so therefore we must do X or Y" points to if-ought moral rationality, as the OP suggested.

What I'm proposing is that certain instincts are shared by all humans: common biological imperatives which evolved and which supersede conscious desire in terms of their origin, though they may influence our desires, and which therefore put these "bio-morals" back in the realm of objectivity. And because of this, certain "moral truths" can be derived by properly identifying what these instincts are, and can therefore be prescriptively doled out and even taught to laymen who have not the time or inclination to identify within themselves these biological sources of social correctness, perhaps even at a young age, no different than learning about the body in school.

When I asked if this was a slippery slope, of course I meant, could this line of reasoning lead to trouble and if so, how? I asked because I'm only me, and I can only think of so much, but with enough people looking at it, maybe the flaws could be fleshed out.

I just want to know if it's possible, too. There's instinct, no doubt. And it plays a role in our decision-making, I've tested this theory IRL just to be sure. People can and will abandon all their platitudes when put to the sword, because at the end of the day, we're mostly cowards, bed wetters, we covet, we get jealous, we fear the unknown, on and on and on. We are what we detest to be in writing, in movies, in fiction. And instead of acknowledging this, we sit upon high horses and proclaim things like "oh, I'd NEVER do that!" So certain, yet no way of knowing.

How long did those rugby players go without eating until they FINALLY gave in and said, ya know what, F it, I'm hungry, this dude's already dead, let's eat him. And how horrible did they feel as their stomachs digested? "We're not human anymore." Seriously? Okay... maybe it -was- a moral conundrum, maybe some of them would rather have been unconscious than have to LIVE with the knowledge they'd become cannibals. Somewhere, deep in their brains, a signal was sent that said "forget what you're thinking, You. Need. To. Eat." and so they did.

So that means cannibalism is OKAY! Right?

How many other things are okay?

AfungusAmongus said:
Just because you (should) act differently doesn't mean that your moral theory is different. A moral theory should apply to many circumstances, including various stages in your life. In your example, a consequential analysis of giving your life depends on two things: your own value, and the expected value gained from your sacrifice. All people have value, but (as any life insurance accountant can tell you) we're not all equally valuable. Moral value is different from financial worth, but you get my point.
I do, but I seem to still be thinking the same question, so maybe I need more clarification.

A young person should be a *insert moral theory* and an old person should be a *insert alternate moral theory*.

Yes or no?

To me, it seems plausible that Yes would be the answer, because of the fact that young people value things so differently, but I suppose it could be that moral frameworks need to be less specific, so that they -can- apply to all age groups. Or genders, or races, or ethnicities, etc etc etc.
 

LarsINTJ

Sehnsucht said:
IF I want to be virtuous, THEN I should define that which is evil, so that I might avoid evil acts and behaviours. Is this the way of it?
Yes.

Sehnsucht said:
If I don't want to be virtuous, then I wouldn't be inclined to avoid evil acts and behaviours. I might even actively pursue them, if it suited my purposes.

So why should a person seek to be virtuous?
In order to happily co-exist with other virtuous people.

Sehnsucht said:
And also, what is the definition of virtuous, in this case?
Virtuous: To be guided by a universally consistent decision-making methodology. Both capable of supporting one's own needs as well as the needs of others you have committed to.

Traits like honesty, courage, compassion and respect all inevitably follow if a virtuous person is to live their values.

Sehnsucht said:
So morality is, or should be, about consistency in actions and behaviours across any and all situations?
If someone is to be objectively virtuous, then they must follow an objective ethical framework. An objective ethical framework can only define evil actions, not good ones. Specific definitions of good behavior will always be subjective due to conflicting circumstances and interests. An action not immediately classified as evil is inherently neutral - it could either be something nice like a surprise gift or something cruel like ignoring someone in need of assistance.

Sehnsucht said:
What is the NAP?
The Non-Aggression Principle
1. Do not initiate murder
2. Do not initiate theft
3. Do not initiate misinformation
All three are permissible in the case of self-defense against an external initiator, but only to match the force imposed upon you. A greater retaliation is not acceptable.

The NAP applies to everyone.
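To put the proportionality condition in rough notation (a sketch of the rule as just stated, nothing canonical):

$$\text{Permissible}(r \mid a) \iff \text{Initiated}(a) \;\wedge\; \text{Force}(r) \le \text{Force}(a)$$

where $a$ is the initiating act (murder, theft, or misinformation) committed against you and $r$ is your retaliatory response.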

Sehnsucht said:
By property rights, do we mean that a person is entitled to having something, whether it be themselves, their privacy, their autonomy, their acquired goods and possessions, and/or so forth? In what way is a person entitled to any or all of these things?
The NAP is based on the axiom of self-ownership. If we own our bodies we must also own the consequences of our actions.

If I plant a seed and a tree grows, that tree belongs to me because it would not exist without my input.

Money represents productivity and energy expenditure, a consequence of human effort.

Let's say the new tree I planted bears fruit, someone else may wish to trade their effort (money) for a piece of my effort (fruit).

Sehnsucht said:
Why should I choose to respect the notion of consent, over not respecting it? To do things with the consent of others versus violating a person's consent? In what way is the former preferable to the latter, or vice-versa?
Why is consent important? Disregarding the NAP for now, imagine a typical thief who makes a living off stealing the consensual productivity of others. How successful do you think this thief would be if everyone else in the world was also a thief?

"It is universally preferable to violate property rights"
vs.
"It is universally preferable to respect property rights"

The former is not universally consistent for the three reasons I gave in my initial post.
- Simultaneous denial and affirmation of property rights. A thief expects to keep what they have stolen.
- The total acceptance of property right violation flips to become consensual. If everyone in the world is a r.apist then it is no longer r.ape.
- It morally condemns inaction despite a lack of choice, the foundation of ethics.


There are no contradictions raised by the latter, so it is worthy of consideration.

Sehnsucht said:
The Questions Reloaded:

-How is self-ownership axiomatic? And what is self-ownership? Do I "own" myself? What aspects of myself do I own?
One cannot argue against self-ownership without demonstrating and accepting it. Whose mouth is speaking? Whose fingers are typing? Whose mind are they trying to change?

Sehnsucht said:
-Why is it crucial that we cease to enable "predators" (those that commit or perpetuate evil)? Why is it crucial to abandon our "relativistic amoral zeitgeist"?
The complete rejection of human predators is crucial if the future is to be happy and sustainable for those who are virtuous. Relativism enables evil through an irrational denial of absolutes; it creates a befuddling ideological fog of moral insanity.

Sehnsucht said:
It seems you've derived an ethical framework from axioms and definitional ramifications. A rationalist model, if you will, as opposed to an empirical one like the consequentialist models myself and the others above have been whittling away at ("empirical" in that it uses factors of human experience and biology as the basis, as opposed to logical inferences).
This method of assessing any ethical proposition to determine evil from neutral is empirical at its core. Logic is derived from the consistency of external matter.

Sehnsucht said:
Interesting stuff, in such a case. And again, I'm not necessarily opposing your model; I'll only be able to do that (should I choose) once I'm sure I've understood the underpinnings of the model. 8D
Well, I'm simply relaying what I understand about an ethical theory called UPB (universally preferable behavior) by Stefan Molyneux. Check him out if you're interested.

The NAP, on the other hand, is quite old because it is intuitive; it's just that society has a bad habit of creating exceptions for whatever self-centered justification is masquerading as a vague "greater good".
 

AfungusAmongus

Sucumbio said:
A young person should be a *insert moral theory* and an old person should be a *insert alternate moral theory*.
(emphasis mine)

You appear to be presupposing some ethical code that applies to both young and old people. This is a good thing. Inconsistent ethical judgments make us hypocrites, and flip-flopping ethical motivations can be self-defeating.
 

Sucumbio

AfungusAmongus said:
(emphasis mine)

You appear to be presupposing some ethical code that applies to both young and old people. This is a good thing. Inconsistent ethical judgments make us hypocrites, and flip-flopping ethical motivations can be self-defeating.
Excellent. So then what if any classic framework can do this? And can it be tied in any way to our instincts?

Sehnsucht said:
What is the NAP?
It's what I take after reading this ****.

Sorry I couldn't resist ☺
 

AfungusAmongus

Sucumbio said:
Excellent. So then what if any classic framework can do this?
Just about any classic theory can adapt to different stages of life, although some better than others. Deontology is fairly rigid since it's based on universal rules, while Utilitarianism adapts easily (critics would say too easily) since your ability to promote happiness clearly varies with age.

Hume was keen on showing that many of our strongest beliefs are based on habit or custom, such as (his favorite target) our belief in causality. But he knew the importance of instinct as well:

Hume said:
though animals learn many parts of their knowledge from observation, there are also many parts of it which they derive from the original hand of nature; [...] and in which they improve little or nothing by the longest practice and experience. These we denominate INSTINCTS, and are so apt to admire [...]. But our wonder will perhaps cease or diminish, when we consider that the experimental reasoning itself, which we possess in common with beasts, and on which the whole conduct of life depends, is nothing but a species of instinct
Hume includes both conscious and subconscious instincts. This seems to leave room for important moral instincts, in the form of subconscious passions and also in the form of conscious inferences that both come naturally to us.
 

Sehnsucht

This post is long, so I'll collapse responses based on the correspondent.

[collapse=SUCUMBIO SPRACHS]
Sucumbio said:
Well, this is kind of where I said "everything else being normal," but that was unclear in and of itself, so let's say it another way.

Humans need water to live. They need clothing and shelter to survive in harsh climates. They need to procreate in order for the species to continue, etc. etc. These are all factual statements. However, we cannot say that these things are "good" things to do, because in each instance the facts are themselves not prescriptive. They only exist as statements, empty of true meaning. Saying "we want to live, so therefore we must do X or Y" points to if-ought moral rationality, as the OP suggested.
Indeed. If we want to live, we should work toward securing our survival.

We tend to want to live, so the above statement (if we want to live) tends to apply for most. Hence why most do work toward securing their survival.

Sucumbio said:
What I'm proposing is that certain instincts are shared by all humans: common biological imperatives which evolved and which supersede conscious desire in terms of their origin, though they may influence our desires, and which therefore put these "bio-morals" back in the realm of objectivity. And because of this, certain "moral truths" can be derived by properly identifying what these instincts are, and can therefore be prescriptively doled out and even taught to laymen who have not the time or inclination to identify within themselves these biological sources of social correctness, perhaps even at a young age, no different than learning about the body in school.

When I asked if this was a slippery slope, of course I meant, could this line of reasoning lead to trouble and if so, how? I asked because I'm only me, and I can only think of so much, but with enough people looking at it, maybe the flaws could be fleshed out.

I just want to know if it's possible, too. There's instinct, no doubt. And it plays a role in our decision-making, I've tested this theory IRL just to be sure. People can and will abandon all their platitudes when put to the sword, because at the end of the day, we're mostly cowards, bed wetters, we covet, we get jealous, we fear the unknown, on and on and on. We are what we detest to be in writing, in movies, in fiction. And instead of acknowledging this, we sit upon high horses and proclaim things like "oh, I'd NEVER do that!" So certain, yet no way of knowing.

How long did those rugby players go without eating until they FINALLY gave in and said, ya know what, F it, I'm hungry, this dude's already dead, let's eat him. And how horrible did they feel as their stomachs digested? "We're not human anymore." Seriously? Okay... maybe it -was- a moral conundrum, maybe some of them would rather have been unconscious than have to LIVE with the knowledge they'd become cannibals. Somewhere, deep in their brains, a signal was sent that said "forget what you're thinking, You. Need. To. Eat." and so they did.

So that means cannibalism is OKAY! Right?

How many other things are okay?
So basically, we can determine what is most optimal for the human being by quantifying the "bio-ethical" imperatives engraved in their programming. From these imperatives, we would derive an objective set of Bio-Morals.

One Bio-Moral could be "It is good to eat to survive". Therefore, cannibalism would be an acceptable course of action, since it satisfies that criterion.

I think I may have already touched on this earlier. We are capable of denying our "bio-ethical imperatives", since we are self-aware, and are thus capable of self-direction (to the degree that non-conscious factors permit conscious self-direction). Who cares if, as living things, we need to eat? The answer will be whether we collectively (or individually) decide to care about our programming or not.

If we all got together and decided cannibalism was "good", then it would be "good". Without an objective referent to bind our capacity to make choices, we could make such an accord. But it is an arbitrary one, because we could all just as easily collectively agree that cannibalism is "bad".

I'm getting the sense that this is at the heart of your concern. Any ethical model that is built from the ground up without any presupposed objective referents will be arbitrary, in that we're making subjective decisions that could just as easily turn out one way or another. That we decide that cannibalism is "good" is just as arbitrary as saying that it is "bad".

However, that we can acknowledge that any model we construct is arbitrary and subjective does not mean that such models are without meaning or pragmatic value. They can still mean something to us, much as life can mean something to us despite life having no substance of meaning (much as facts have no substance of morality).

And I believe that the majority of sound persons will choose a moral framework in which mutual collective fulfillment of desires is viewed as an objectively more efficient and advantageous way to fulfill individual desire, than everyone going at it on their own.

And we can see this in action; societies and civilizations require pro-social accords between people and communities. We would likely have not made it to the point we are now without having mutually agreed that cooperation was more advantageous in the long term than absolute individual autonomy.

This pro-social ethical framework may be arbitrary. But I suspect that it appeals to your basic existing desires as a human being (life, liberty, security, etc.), and it appeals to my basic desires, so we can both agree to call it "good". ;)

(NOTE: We can use factual "bio-ethical imperatives" to derive an ethical model, but that such a model happens to parallel our existing pro-social wiring doesn't necessarily mean that our model is an objective morality rooted in "Bio-Morals" (nor that "Bio-Morals" even exist). That we have "Bio-Morals" only matters insofar as we agree that they do, for otherwise, they don't).

Sucumbio said:
I do, but I seem to still be thinking the same question, so maybe I need more clarification.

A young person should be a *insert moral theory* and an old person should be a *insert alternate moral theory*.

Yes or no?

To me, it seems plausible that Yes would be the answer, because of the fact that young people value things so differently, but I suppose it could be that moral frameworks need to be less specific, so that they -can- apply to all age groups. Or genders, or races, or ethnicities, etc etc etc.
If I might chime in, I'd say that a consequential framework would apply universally across all ages. It is simply that different age demographics may have different desires/goals (due to life experience disparities, etc.), and may thus consider current and future variables for a given situation in a different light.

Sucumbio said:
It's what I take after reading this ****.

Sorry I couldn't resist ☺

[/collapse]

[collapse=LARSINTJ HAS MUCH THINGS TO SAY]
LarsINTJ said:
Yes.

In order to happily co-exist with other virtuous people.

Virtuous: To be guided by a universally consistent decision-making methodology. Both capable of supporting one's own needs as well as the needs of others you have committed to.

Traits like honesty, courage, compassion and respect all inevitably follow if a virtuous person is to live their values.

If someone is to be objectively virtuous, then they must follow an objective ethical framework. An objective ethical framework can only define evil actions, not good ones. Specific definitions of good behavior will always be subjective due to conflicting circumstances and interests. An action not immediately classified as evil is inherently neutral - it could either be something nice like a surprise gift or something cruel like ignoring someone in need of assistance.
Got it. 8)

LarsINTJ said:
The Non-Aggression Principle
1. Do not initiate murder
2. Do not initiate theft
3. Do not initiate misinformation
All three are permissible in the case of self-defense against an external initiator, but only to match the force imposed upon you. A greater retaliation is not acceptable.

The NAP applies to everyone.
Does it apply to everyone by virtue of the NAP's definition (which says it applies to everyone)? Or is there a logical proof (or series of proofs) that support that claim (i.e. A, B, C, therefore the NAP applies to everyone)?

It might be useful for the record to distinguish whether the NAP's universality is an axiom that you're proposing, or if it's the product of syllogistic deduction.

LarsINTJ said:
The NAP is based on the axiom of self-ownership. If we own our bodies we must also own the consequences of our actions.

If I plant a seed and a tree grows, that tree belongs to me because it would not exist without my input.

Money represents productivity and energy expenditure, a consequence of human effort.

Let's say the new tree I planted bears fruit, someone else may wish to trade their effort (money) for a piece of my effort (fruit).
If we own our bodies, then we own the consequences of our actions, and the output of our efforts.

A sensible definition. Though as I relate below, I wonder about Is-Oughtisms.

LarsINTJ said:
Why is consent important? Disregarding the NAP for now, imagine a typical thief who makes a living off stealing the consensual productivity of others. How successful do you think this thief would be if everyone else in the world was also a thief?

"It is universally preferable to violate property rights"
vs.
"It is universally preferable to respect property rights"

The former is not universally consistent for the three reasons I gave in my initial post.
- Simultaneous denial and affirmation of property rights. A thief expects to keep what they have stolen.
- The total acceptance of property right violation flips to become consensual. If everyone in the world is a r.apist then it is no longer r.ape.
- It morally condemns inaction despite a lack of choice, the foundation of ethics.


There are no contradictions raised by the latter, so it is worthy of consideration.
If everyone in the world was a thief, consistently and mutually stealing from one another, then global success (if we define success as maximal retention of one's goods) would be low.

Yet what does it matter that people are on the whole successful or not? That everyone is a consent-violating thief or not?

I don't want to have my stuff stolen, and I imagine that you don't want to have your stuff stolen, either. But what do our desires matter?

They would only matter if we all agree that they do. And universal consistency in one's ethical approach similarly only matters if we all decide that universality matters. Because you could show up at my door and extol the virtues of being virtuous, only for me to flip you the bird and kick you in the crotch.

It seems to me that because we have agency (or the subjective experience thereof), there is no imperative to abide by any objective moral standard, because I can abide by or reject it for whatever reason(s) I care to offer (and abide/reject on a consistent or inconsistent basis, as per my prerogative).

Your model is internally consistent, and everything follows from the base axioms, and so I'd say that it's a strong and valid ethical theory. But if I don't care about axioms and definitions and syllogistic deduction and inference, then the model doesn't amount to much, pragmatically speaking.

I'll cover below the route I've taken in trying to construct my own working moral hypothesis. Because I'm not saying you're wrong; I'm just concerned about the objective-subjective distinction (and the Is-Ought distinction as well, since that's the thread topic). So bear with me for a little longer.

LarsINTJ said:
One cannot argue against self-ownership without demonstrating and accepting it. Whose mouth is speaking? Whose fingers are typing? Whose mind are they trying to change?
My mouth speaks the words I speak, and my fingers type the words typed here. And it is my mind that would be trying to change another's mind (though do note that in this case, I'm not trying to change your mind of anything, nor have I yet engaged in any real defense of any of my proposed models).

Does the fact that I have typed this sentence mean that I own it? Does the fact that I engraved a message on the wall mean that I own it? Does the fact that my heart is currently pumping mean that it deserves to not be shanked by a knife?

This seems tied to the Is-Ought distinction. If we define "self-ownership" as "those things which pertain to an agent", then yes, I "own" myself and my words and deeds. Yet how might you derive an Ought ("You Shall Not Violate Self-Ownership") from an Is ("All Agents Possess Self-Ownership")?

So I agree that you can't argue against the fact of self-ownership. This is a matter of definitions, so it's axiomatically true that we all have self-ownership. The concerns arise when you attempt to derive prescriptive moral edicts from this axiom.

LarsINTJ said:
The complete rejection of human predators is crucial IF the future is to be happy and sustainable for those who are virtuous. Relativism enables evil through an irrational denial of absolutes; it creates a befuddling ideological fog of moral insanity.
Would this not be relative? This is a conditional statement; it could easily be otherwise. Rejecting human predators only matters if we all desire (and thereby, agree to pursue) a future that is happy and sustainable (which, we can further come to agree, requires us to hold to a standard of virtuous living).

What if none of us want a future that is happy and sustainable? Would we still have any incentive to eschew Evil?

If you were to ask people, most will likely say that they do in fact desire a happy and sustainable future for themselves. I certainly find such a future appealing. But that only speaks to the primacy of desire as the impetus of human action.

There is no necessary reason that the human species should or should not be overall happy. Or whether it should or should not flourish. Or even whether it should or should not exist in this universe (or any other). We all want to be happy and flourish, but who cares about what we want? Are we owed what we want simply by the fact of having wants?

I again come to suspect that there is a more fundamental axiom at play here than the NAP or Self-Ownership or the Virtue of Universality -- the axiom in which in order for any of this to matter, we must first agree, mutually and collectively, that any of this does matter.

LarsINTJ said:
This method of assessing any ethical proposition to determine evil from neutral is empirical at its core. Logic is derived from the consistency of external matter.
Mm, yes. I was just trying to make a distinction between the way these ideas seem to have come about. You've used axioms, definitions, and derivations thereof (by reason alone), whereas I've been toying with factoring variables and constants of human experience, as molded and directed by biological and cultural evolution, self-awareness, and so on.

Naturally, logic parallels reality, since the former is modelled after the latter, and the latter appears to be causal and consistent (thus making logic causal and consistent).

LarsINTJ said:
Well, I'm simply relaying what I understand about an ethical theory called UPB (universally preferable behavior) by Stefan Molyneux. Check him out if you're interested.

The NAP, on the other hand, is quite old because it is intuitive; it's just that society has a bad habit of creating exceptions for whatever self-centered justification is masquerading as a vague "greater good".
So that site has free materials on tons of philosophical matters? And the page itself has a bunch of stuff pertaining to Molyneux? Neat. I see there are PDF files aplenty, so it would make for convenient reading. I'll put the page in my Bookmarks for later use.

Anyway, I said above that I'd outline what I have thus far by way of moral speculation, so here it is for the record:

[collapse=Sehn's Propositions]
My current working model was derived from a matter of investigation. You first have to try to define the concept of morality (it concerns choices, and which we should make), then determine the foundation of choice (it is the agent that makes choices), then determine the impetus of choice (desires and values are what drive human action).

This alone tells you nothing about what you should or should not do; it merely establishes the mechanism of action and consequence. People do things because they want something, and so act in such a way so as to fulfill their desire(s). It is factual that people have desires, and tend to be aware of them, and tend to act upon them. We can use these facts to derive a standard of moral prescriptions.

If everyone tends to want the same thing, then it is objectively the case that cooperation will increase the probability that you and I satisfy our individual desires, more so than if we sought to satisfy them on our own (or even at the expense of one or both of us). If I:

>Want what I want;
>Tend to want to get what I want;
>Recognize that others want what they want, and tend to want to get what they want;
>Acknowledge that pro-social approaches are more efficient, advantageous, and sustainable in the long-term for the fulfillment of what I want;
>THEN we should engage in a pro-social approach of mutual cooperation, wherein the fulfillment of our collective wants is maximized, and factors that impede those collective wants are minimized.

We can label that which maximizes global fulfillment as that which is "good", or what we "should" do, and that which minimizes or impedes global fulfillment as that which is "bad", or what we "should not" do.
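To put that labelling rule in shorthand (again, just a sketch of the idea, not a formal theory):

$$\text{good}(x) \iff \Delta F(x) > 0, \qquad \text{bad}(x) \iff \Delta F(x) < 0$$

where $\Delta F(x)$ stands for the change in collective desire-fulfillment that action $x$ brings about. All the hard work, of course, hides in how $F$ would actually be measured.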

It doesn't have to be this way. No one has any obligation to adhere to such a model of living. But I believe that most people want to maximize the likelihood of fulfilling their desires, and will thus band together in a pro-social manner, whereas those that don't care to follow this model will be in the minority. What will invariably happen is that this minority, in trying to impede the desires of this pro-social majority, will be met with resistance. Which is the case in the collective arrangements we call "societies".

If we all want the same things (e.g. life, liberty, security, etc.), then it is in our interest to eschew, minimize, and discourage actions, behaviours, and views that impede those things -- which will include dealing with individuals who exhibit such traits.

This model neither prescribes how we might go about fulfilling life/liberty/security, nor which of these possibilities are "good" or "bad" -- because it is we who define "good" and "bad", for better or worse.

You may say that this whole line of reasoning is relativistic. And it is. It is arbitrary, because it could just as easily be any other way.

But human existence itself is arbitrary, since we could well not exist. Yet we do exist, and we do have desires, and these desires do drive our actions. What we choose to do with these facts is up to us.

The problem, if there is one, is that people don't tend to sit down and actually talk these issues out. Or perhaps, don't tend to investigate and scrutinize both their own convictions/beliefs/views, and the subject of ethics and morality in themselves. So we have a mishmash of conflicting ideals, in which pro-sociality is not necessarily the dominant or most desired approach to conducting ourselves.

This is neither a good nor a bad thing. And it will continue to be neither until we all decide that it is one or the other.

And I could certainly try to extol "Desire-Impetus Pro-Social Consequentialism" as the optimal ethical model.

But only if I desire to do so.
[/collapse]

So that's what I've been mulling over these past few months, ever since I tried to put some actual thought in determining where I might stand in terms of Ethics. It all rests on the primacy of human desire as the driver of action -- because if we didn't truly want anything, we wouldn't have the incentive to do anything. And if morality concerns actions, and no actions are taking place, then morality becomes void and irrelevant (much as how an objective standard of morality, divorced from the human experience, would amount to nothing if no humans, or even agents in general, existed).

I find it appealing, since it has pragmatic value and is relevant to (and rooted in) human experience. If morality concerns the human being, then it would be necessary to account for the variables that influence human action. Better, then, to work along the grain of the human being, and not against it, I say.

Do note that I'm terming this a "working" model, since I haven't formalized it to sufficient degree (in my estimation). I've come some way, but I'm not sure I've reached the end of my contemplations, so to speak. Do feel free to comment on the principles of this model, ask questions, underscore flaws or issues or raise objections, etc.

In the end, though, it seems both of our models yield generally the same result. That is, we'll for the most part agree on what is ethical and what is unethical conduct. Which, I would propose in my pragmatism-fetishist way, matters more than which model is technically correct (or barring that, more optimal). 8DDDDD
[/collapse]

Geez, what an unwieldy and misbegotten post.

You guys bring out the worst in me.
 

Sucumbio

Smash Giant
Moderator
Writing Team
Joined
Oct 7, 2008
Messages
8,166
Location
Icerim Mountains
"The answer will be whether we collectively (or individually) decide to care about our programming or not."

Well, okay, and this is true, but I think it's irrelevant to the notion of a moral framework. If we're to construct one, it'd have to be statements that all start with "in general, it's best to" *insert action here*, or it's kinda, well worthless, right? I dunno, maybe not, but it seems like it is. If a moral framework should be universal, then its prescriptive narrative is probably going to reduce toward zero as more and more people are polled "what would you do in situation X." Like if we asked a million people to solve the Trolley problem, or something.

Also, I think the notion that desires interrupt factual bases can be overlooked. Some facts are just too important to allow human desire to reduce them to the arbitrary. This idea that we don't want morals to only apply to normal circumstances tells me we need something else besides morals to attend to those things that DO exist in a "normal" world. Sure we can have morals. Lars' 3 sound great. But I want a Ten Commandments that isn't based on some hokey religion (though you gotta hand it to the Jews, those 10 rules are mostly good ones to live by, but then some are kinda, huh?). I want one that's based on biology. Scientifically demonstrable as effective all the time *so long as the setting is right* ... cause yeah, saying "Thou shalt not steal" is well and good, but if you're starving to death on the streets, should you just die, or should you risk stealing that loaf of bread? Obviously you're going to find some people who are all fine and dandy with stealing the bread. So it's not that kind of commandment. It's more like: well, I don't know, what could one be? I think part of the problem here is that there are so many "moral" questions and answers that ought not to be moral dilemmas but instead common-sense dilemmas.
 

Sehnsucht

The Marquis of Sass
BRoomer
Joined
Feb 9, 2014
Messages
8,457
Location
Behind your eyes.
"The answer will be whether we collectively (or individually) decide to care about our programming or not."

Well, okay, and this is true, but I think it's irrelevant to the notion of a moral framework. If we're to construct one, it'd have to be statements that all start with "in general, it's best to" *insert action here*, or it's kinda, well worthless, right? I dunno, maybe not, but it seems like it is. If a moral framework should be universal, then its prescriptive narrative is probably going to reduce toward zero as more and more people are polled "what would you do in situation X." Like if we asked a million people to solve the Trolley problem, or something.
So you're saying a moral system is only worth as much as it can effectively be applied to a broad set of circumstances. As Lars put it, moral systems that are more consistent in application will tend to yield better and more consistent results. And I'd agree.

A universal prescriptive model would only be useful if it could account for every variable and their combinations. So that if you're presented with a scenario with variable combination X, you'd know to apply solution Y. The issue is that we're likely not capable of codifying a standard to such a finely-tuned extent.

So we have to work on a case by case basis. We can likely derive general solutions by taking general cases and applying your metrics to them (e.g. a pro-social metric). So for the concept of theft, you could say that theft impedes the fulfillment of collective desire (by denying someone their goods), so we should discourage theft as a practice. But each case of theft would have variables specific to it (e.g. the motive of the theft, the way it was carried out, the extent and nature of that which was stolen, etc.), so we'd have to consider them every time, for each case, before determining how to proceed.
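
To show what I mean by a general rule plus case-specific variables, here's a rough sketch (Python again); the fields, penalties, and weights are invented for illustration, not a worked-out calculus:

[code]
# A rough sketch of "general rule, case-specific variables" applied to theft.
# The field names, penalties, and weights below are invented for illustration.

from dataclasses import dataclass

@dataclass
class TheftCase:
    motive: str          # e.g. "greed" or "survival"
    value_taken: float   # rough measure of what the victim was denied
    violence_used: bool

BASE_PENALTY = -10.0     # general rule: theft impedes collective desire-fulfillment

def assess(case: TheftCase) -> float:
    """Score a specific case; the more negative, the more strongly discouraged."""
    score = BASE_PENALTY - case.value_taken
    if case.violence_used:
        score -= 20.0    # harm to the victim compounds the impedance
    if case.motive == "survival":
        score += 8.0     # mitigating circumstance (the starving-person case)
    return score

print(assess(TheftCase("survival", 2.0, False)))  # bread-for-the-starving
print(assess(TheftCase("greed", 50.0, True)))     # armed robbery
[/code]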

I've not yet done the legwork of trying to apply my formative model to general cases of moral issues. But if we're using a Pro-Social Desire-Driven Consequential Framework***, then prescriptive edicts for general cases could certainly be derived, and thereafter applied by practitioners of the model on a (generally?) universal basis.

Why don't you give me a short list of ethical/moral/social/philosophical/etc. issues to which moral scrutiny could be applied? Maybe putting this consequential view into practice will help to elucidate your concerns. I had thought you were expressing concerns about how to derive an objective basis from these subjective and otherwise-arbitrary notions; if you just want to know how one might go about applying these ideas in the real world, then better to put the principles to the test.

***Featuring Dante from Devil May Cry!

Also, I think the notion that desires interrupt factual bases can be overlooked. Some facts are just too important to allow human desire to reduce them to the arbitrary. This idea that we don't want morals to only apply to normal circumstances tells me we need something else besides morals to attend to those things that DO exist in a "normal" world. Sure we can have morals. Lars' 3 sound great. But I want a Ten Commandments that isn't based on some hokey religion (though you gotta hand it to the Jews, those 10 rules are mostly good ones to live by, but then some are kinda, huh?). I want one that's based on biology. Scientifically demonstrable as effective all the time *so long as the setting is right* ... cause yeah, saying "Thou shalt not steal" is well and good, but if you're starving to death on the streets, should you just die, or should you risk stealing that loaf of bread? Obviously you're going to find some people who are all fine and dandy with stealing the bread. So it's not that kind of commandment. It's more like: well, I don't know, what could one be? I think part of the problem here is that there are so many "moral" questions and answers that ought not to be moral dilemmas but instead common-sense dilemmas.
So are you proposing a challenge or a game in which we should try to derive a series of Objective Commandments from facts of biology and psychology? That could be fun times.

I don't seek to ignore facts of human existence, or render them irrelevant to the moral equation; I'm just using them to inform my speculation. You can acknowledge that they exist, and choose to work with them or against them (or not acknowledge them, I suppose). I find it more interesting (and more rational) to work with them.

This seems related to your prior concern of estimating the worth and/or rigour of an ethical model by how consistently it can be applied, and how consistent the results it yields are. And I agree that this is a reasonable thing to expect from an ethical model.

Fortunately, we have objective knowledge on the universality of common factors of human experience (as opposed to, say, objective knowledge of Standards of Divine Fiat). If you're looking for an objective foundation to provide some measure of universality and consistency in your ethical precepts, then facts of human biology and psychology are as good a thing to focus on as any (and things utterly relevant to our experience to boot).
 

Sucumbio

Smash Giant
Moderator
Writing Team
Joined
Oct 7, 2008
Messages
8,166
Location
Icerim Mountains
lol DMC is awesome, but I like Castlevania: LoS better >P

So you're saying a moral system is only worth as much as it can effectively be applied to a broad set of circumstances. As Lars put it, moral systems that are more consistent in application will tend to yield better and more consistent results. And I'd agree.
Yeah, believe it or not I think @ LarsINTJ LarsINTJ is onto something... I prefer this minimalist approach to moral theory because it can be applied broadly across many people ... it need not be 1000s of pages long with tons of if/then statements for Every. Damn. Thing. I know you just said "so we have to work on a case by case basis" but I think that actually we don't IF we're talking about the overarching moral principle. The underlying moral question/answer that does take pages of debriefing and a congressional hearing, well that's secondary, and that's where I think human desire, ergo Hume, comes into play.

For instance, let's go with the OP's example. There is a wall in front of me. My desire not to walk into the wall (because it will hurt) means I will not walk forward. Knowing there is a wall in front of me (a fact) tells me nothing in and of itself, only that if I want to walk forward without getting hurt, I must change my desire.

What I'm suggesting is that there IS a prescriptive truth attached to the observation "there is a wall in front of me." That truth is that walls are hard. Desire then follows: I don't want to walk into hard things because it hurts (not only walls hurt when walked into; anything hard hurts). So the original Fact, hard-thing, told me what I desired. My choice was made for me, really. If I were to give it a second thought, I could decide to hurt myself, but now we're a desire twice removed.

I think that we could examine life and discover a whole world of facts based on empirical data that could indeed prescribe a model for living "right," by following this premise. And I think it could avoid being arbitrary (let's agree that subjective<=>arbitrary for this discussion). True, we are our own masters; I can choose (desire) to hurt myself, but this is perverse. I think the original "gut" reaction "that's gonna hurt, don't do it" that your brain tells you is the REAL response, and anything else is learned second-hand, through conditioning, brainwashing, practice, meditation, drugs, whatever.

This goes back to "everything else being normal." I know we don't want "normal"; it's an ugly word. So I guess I could think of a different word, but I dunno what word that would be. Default? Perhaps...

So are you proposing a challenge or a game in which we should try to derive a series of Objective Commandments from facts of biology and psychology? That could be fun times.
Yeah, or rather, just a set of (I'm stealing this from whoever else has used it) First Principles. Facts that tell us something without the luxury of conscious intellect and thereby "desire" getting in the way... nothing to -interpret- the data, it's just immediate knowledge. The same way an animal just KNOWS not to eat the blue flower cause it's poison to them. Instincts. They really are a separate category of knowledge, as far as I can tell. And they are far more important than these moral theorists want to admit.
 

Sehnsucht

The Marquis of Sass
BRoomer
Joined
Feb 9, 2014
Messages
8,457
Location
Behind your eyes.
Yeah, believe it or not I think @ LarsINTJ LarsINTJ is onto something... I prefer this minimalist approach to moral theory because it can be applied broadly across many people ... it need not be 1000s of pages long with tons of if/then statements for Every. Damn. Thing. I know you just said "so we have to work on a case by case basis" but I think that actually we don't IF we're talking about the overarching moral principle. The underlying moral question/answer that does take pages of debriefing and a congressional hearing, well that's secondary, and that's where I think human desire, ergo Hume, comes into play.
A simple, efficient, consistent model whose application is both broad and deep is the ideal model. No contest there.

And I'm not necessarily saying that, faced with a crisis, I need to call a time-out, take out my notepad and calculator, and work out the math of this moral dilemma I'm stuck in before going any further. I'm just saying that there are basic classes of situations we could find ourselves in, but each case has slight differences in variables that would best be accounted for, if you want to make the optimal decision (whatever that might be). You can't always have all the information, but you have to work with what you got.

It is important to note that all of my IF-THEN talk of conditional relations doesn't itself account for "right" and "wrong"; it simply provides the mechanism of morality (i.e. how and why choices are made, with "choices" as the "stuff" of morality). Sure, you can default to an IF-THEN conditional assessment for every decision, big and small, but that is a rather inefficient way to go about things.

A pro-social ethical standard serves as shorthand for the most efficient expression of this IF-THEN conditional mechanism. We can derive rules of conduct that can apply to a wide variety of cases by acknowledging objective commonalities in desires, and how it's more rational to collectively and mutually help one another attain them, IF we want to (and we generally do want to, so we have our question answered already).

We can codify into canon a number of basic rules of application for different situations we can conceive. The basis for determining what course of action should be taken will be contingent on maximizing desire-fulfillment and minimizing desire-denial. Once you've engraved these "IF situation X, THEN do Y" rules of application into stone tablets, you can go about your life without having to always pause to do IF-THEN calculations on your ethical abacus. These edicts, then, become shorthand referents for all this conditional mechanism talk (e.g. "Generally speaking, I shouldn't steal things" is shorthand for "stealing impedes desire fulfillment and everyone has the desire of wanting to keep their things so stealing lowers maximal desire fulfillment on the whole blah blah blah").
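
If it helps, here's a toy sketch of that shorthand idea in Python; the rule names and the fallback scoring are illustrative assumptions only:

[code]
# Sketch of the "stone tablet" shorthand: consult the engraved general rules
# first, and fall back to the full IF-THEN calculation only when no rule fits.
# The rule names and the fallback scoring are illustrative assumptions.

GENERAL_RULES = {
    "steal":        "don't",   # shorthand for the full desire-fulfillment argument
    "assault":      "don't",
    "help_injured": "do",
}

def full_if_then_calculation(action, context):
    # Stand-in for working the consequences out from scratch: weigh how the
    # action changes everyone's desire-fulfillment on the whole.
    return "do" if context.get("net_fulfillment_change", 0) > 0 else "don't"

def decide(action, context=None):
    context = context or {}
    if action in GENERAL_RULES:            # the ethical abacus stays in the drawer
        return GENERAL_RULES[action]
    return full_if_then_calculation(action, context)

print(decide("steal"))                                          # -> "don't"
print(decide("plant_a_garden", {"net_fulfillment_change": 3}))  # -> "do"
[/code]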

The goal is to derive a useful ethical standard without appealing to presupposed standards (such as those exhibited in certain theologies). All I've been doing is going on and on about the figurative math behind such statements as "Treat people nicely", or "Don't punch people in the nips", or "Help others if you can", and so forth.

So in short, my position as of the present is that:

A) The crux of moral substance is in the consequences of acts, which are perpetrated by agents, whose actions are driven by what they want;
B) That it is possible to derive a standard of what we should and should not do, based on the facts involved, and which is consistent and has useful applications;
C) That this standard, ultimately, resembles a consequential model in which pro-social acts and behaviours are encouraged (which we can label as "right"), and anti-social ones discouraged (which we can label as "wrong"); a standard that places value in the Platinum Rule, respects such things as the NAP and Self-Ownership and Consent, and so forth.

Is there anything in this set of assertions that strikes you as controversial or unfounded? Is item C) not the endpoint we're all trying to attain through our present ethics-brainstorming session?

For instance, let's go with the OP's example. There is a wall in front of me. My desire not to walk into the wall (because it will hurt) means I will not walk forward. Knowing there is a wall in front of me (a fact) tells me nothing in and of itself, only that if I want to walk forward without getting hurt, I must change my desire.

What I'm suggesting is that there IS a prescriptive truth attached to the observation "there is a wall in front of me." That truth is that walls are hard. Desire then follows: I don't want to walk into hard things because it hurts (not only walls hurt when walked into; anything hard hurts). So the original Fact, hard-thing, told me what I desired. My choice was made for me, really. If I were to give it a second thought, I could decide to hurt myself, but now we're a desire twice removed.

I think that we could examine life and discover a whole world of facts based on empirical data that could indeed prescribe a model for living "right," by following this premise. And I think it could avoid being arbitrary (let's agree that subjective<=>arbitrary for this discussion). True, we are our own masters; I can choose (desire) to hurt myself, but this is perverse. I think the original "gut" reaction "that's gonna hurt, don't do it" that your brain tells you is the REAL response, and anything else is learned second-hand, through conditioning, brainwashing, practice, meditation, drugs, whatever.

This goes back to "everything else being normal." I know we don't want "normal"; it's an ugly word. So I guess I could think of a different word, but I dunno what word that would be. Default? Perhaps...
So facts about reality can (and often do) inform desire. Because instinct is triggered by stimulus; without exposure to stimulus, instinct does nothing. Much as how facts in themselves do nothing, unless they are plugged into an equation of subjectively considered outcomes.

Though are we not conditioned (by wiring, and reinforced by experience) to want to avoid walking into walls, as an extension of wanting to avoid pain of any kind (since pain tends to be deleterious to bodily function)? In such a case, how does wall-hardness possess any prescriptive value?

Pleasure refers to all things that are beneficial to the functioning of the organism; pain refers to all things that are deleterious to the functioning of the organism. Pain is the root of avoidance, and pleasure the root of pursuit. All desires must then stem from some combination of the two (as do emotions, which are reactions to stimulus, offering a reward or exacting a price for a given experience).

You are correct that, while I could in principle choose to ignore my "brain cells" telling me the wall will hurt me if I collide with it, I will not be inclined to want to countermand that impulse. And most people wouldn't. That's the key to my whole consequential spiel; a consistent ethical standard could only be derived from human agents if they themselves are informed by consistent innate norms. And this consistent set of norms comprises the base, primordial gut instincts about how we deal with the world.

Yet none of this prescribes anything, since I can in principle ignore all of that. We could infer a Model of Optimal Human Experience through the compilation of empirical data relating to the human being. But doesn't this just bring us back to the issue of Primacy of Desire, which is the inevitable consequence of having the capacity (for the experience) of choice? We can agree that most people won't ignore their basic urges and instincts, but in principle, we can. It would be better to account for this instead of sweeping it under the rug.

Hence my Primacy of Desire axiom, and the claim that desire is the driver of action (and thus, morality) -- and not bio-ethical imperatives like "avoid the hardness of walls", which may inform desire, but aren't part of the process of thinking about what to do next. And this is because we can consciously reflect on our desires, but not on non-conscious generations of spontaneous knowledge (like instincts) -- because they circumvent the process of thought entirely.

Yeah, or rather, just a set of (I'm stealing this from whoever else has used it) First Principles. Facts that tell us something without the luxury of conscious intellect and thereby "desire" getting in the way... nothing to -interpret- the data, it's just immediate knowledge. The same way an animal just KNOWS not to eat the blue flower cause it's poison to them. Instincts. They really are a separate category of knowledge, as far as I can tell. And they are far more important than these moral theorists want to admit.
So you have Instinct, which is not processed by the Agent, and Desire, which is processed by the Agent.

I'd venture that they're both expressions of the primal Pleasure-Pain Binary of Preference, though. And as noted above, we don't "think" about Instinct. But we think about Desire. When the spontaneous knowledge of Instinct enters our consciousness, we still have to process it, even if only for a moment, before we apply it to our decision-making. So Desire takes precedence over Instinct, even if the latter has a significant role in all this shebang.

I wonder if, in this line of thinking, Instinct becomes Desire when it enters the Conscious. So I'm walking down the street, as per my Desire of wanting to get somewhere. When a hooded assailant surges from the shadows, my Instinct tells me this assailant represents a potential threat to my wellbeing, and processing this Instinct via the Conscious causes my Desire to be revised from heading along my prior path to fleeing the other way.

Unless it's the case that my Desire to not be harmed supersedes my Desire to make it all the way down the street.
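
Just to play with the mechanics, here's a toy run-through of that scenario in Python; the goals, priorities, and trigger are invented for illustration:

[code]
# A toy run-through of the assailant scenario: an Instinct fires on a stimulus,
# enters the Conscious, and the standing Desire gets revised. The goals,
# priorities, and the trigger string are invented for illustration.

current_desire = ("reach the end of the street", 1)   # (goal, priority)

INSTINCTS = {
    "hooded figure lunges": ("avoid harm", 10),       # spontaneous, unreflective
}

def perceive(stimulus):
    """If a stimulus trips an Instinct, it surfaces as a higher-priority Desire."""
    global current_desire
    surfaced = INSTINCTS.get(stimulus)
    if surfaced and surfaced[1] > current_desire[1]:
        current_desire = surfaced          # Desire revised once it turns conscious

perceive("hooded figure lunges")
print(current_desire)   # -> ('avoid harm', 10): flee rather than carry on
[/code]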

Seems I'll be having a lot more fun tinkering with definitions and correlations. *party horn*
 