Why does D&D need CR? Why not use ECL all the time?

General questions, debates, and rants about RPGs


ACOS
Knight
Posts: 452
Joined: Thu Apr 03, 2014 4:15 pm

Post by ACOS »

ishy wrote:My bad, I thought we were talking about 4th edition.
In 4th edition, experience points are not just for "when you overcome a challenge or otherwise achieve a goal" or for when PCs level up; they also have a different function: the XP budget.

In 4th you sum the XP values for everything in an encounter to judge the difficulty of the encounter.
Now whether that works, or whether that is a good idea, is something you can debate; and yes, this is not how 2nd edition does things.
Hmm ... I think we may be talking past each other.
When I griped about "xp budgets", I was indeed talking about 4e (and apparently 5e, too). The difficulty of the encounter seems like it should come from the encounter level (whatever nomenclature happens to be used: CR, EL, whatever). That EL/CR is what should determine how much XP you get - to do it the other way around seems counter-intuitive. And thus dumb.

When I brought up 2e, I was making a bit of an analogy - in 2e, there was really no rubric for judging whether or not a particular encounter was "level appropriate"; so, after a while, you could kinda-sorta get a feel for "level appropriate" based on how many xp the thing was worth. There was never any guidance given by the game in that regard; it was just one of the things you kinda figured out and hoped you guessed right. My point being that xp-value was the only thing you had at your disposal to gauge difficulty (and again, that was very sketchy ... IDK, maybe I was the only one who did that, but it made sense to me at the time).

Inventing the term "xp budget" seems like saying "okay, I've only got *this many* xp to award for this adventure/encounter/whatever - how do I get there?". It implies that you are predetermining how much xp the PCs are allowed to have for any given instance of play. And that's dumb (with the caveat that it is reasonable if you are a company publishing a series of interconnected adventures; because modules). And in this respect, anything resembling sandbox play doesn't seem to be supported under this paradigm.
Again, dumb. Because I know that 4e (and now 5e) does indeed have a CR/EL # attached to each monster; and that's the # that should be used to determine whether or not something is level-appropriate. XP should be a function of that; but you seem to be suggesting in your post that it is the other way around. And that makes me sad.



Alternatively, I could just be yelling incoherent ramblings at the wind. It's been known to happen from time to time. :ohwell:
Username17
Serious Badass
Posts: 29894
Joined: Fri Mar 07, 2008 7:54 pm

Post by Username17 »

A "standard" encounter is going to be a thing that exists, even if it's not formally declared (as in AD&D). A standard encounter will have some number of monsters of a certain power level and it will be worth some amount of XP. Formally, you could plausibly go from the monsters to the XP value or from the XP value to the monsters. That's really what the "XP Budget" was attempting to do: tell people what the XP of the standard encounter was supposed to be and then let them derive the appropriate monsters from that.


The problems are of course numerous. The most blatant is that monster XP values were given in four digit numbers even for medium level creatures and five digit numbers for high level creatures. Adding up columns of four digit numbers is fucking bullshit and I shouldn't have to do it when checking to see if a proposed encounter is tough or weak.

But it's actually much more fundamental than that. The power of team monster does not grow linearly. An encounter with two Orcs is (caveats about short buffs, AoEs, and stealth rules aside) more difficult than two consecutive encounters with one Orc each. The power of an "appropriate" encounter doesn't grow linearly either, and indeed grows faster as the PCs expand in number because the PCs are supposed to be tougher than appropriate encounters. An Orc that is a decent encounter for one PC will be an incredibly weak encounter when there are five such Orcs and five PCs.
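You can see the compounding effect in a quick Monte Carlo. Everything below is a toy model with invented numbers (flat hit chance, fixed damage, strict focus fire; not any edition's actual rules): it compares one PC fighting two orcs at once against the same PC fighting the same two orcs back to back, hit points carried over.

Code: Select all

import random

def fight(pc_hp, n_orcs, p_hit=0.6, dmg=9, orc_hp=11):
    # Toy resolution: the PC focus-fires one orc per round, then every
    # surviving orc swings back. The flat 60% hit chance and fixed damage
    # are invented for illustration, not taken from any rulebook.
    orcs = [orc_hp] * n_orcs
    while pc_hp > 0 and orcs:
        if random.random() < p_hit:
            orcs[0] -= dmg
            if orcs[0] <= 0:
                orcs.pop(0)
        for _ in orcs:
            if random.random() < p_hit:
                pc_hp -= dmg
    return pc_hp  # <= 0 means the PC dropped

def survival_rate(run, trials=50_000):
    return sum(run() > 0 for _ in range(trials)) / trials

# Same total "XP budget" either way, very different lethality:
print(survival_rate(lambda: fight(30, 2)))            # two orcs at once
print(survival_rate(lambda: fight(fight(30, 1), 1)))  # one orc, then another

The simultaneous fight comes out markedly deadlier than the consecutive pair, even though a fixed-XP-per-monster budget prices them identically.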

So if the XP per monster is fixed, you've basically embraced several encounter design fallacies, and any numbers you put in will be bullshit. 3e's logarithm-based XP value chart was weird as hell and designed to support a one-monster-per-encounter paradigm that was a little bit dumb, but it did have the advantage of working at all.

-Username17
ACOS
Knight
Posts: 452
Joined: Thu Apr 03, 2014 4:15 pm

Post by ACOS »

FrankTrollman wrote:That's really what the "XP Budget" was attempting to do: tell people what the XP of the standard encounter was supposed to be and then let them derive the appropriate monsters from that.
Ah, now I get it. (sorry ishy - I know you tried)
see what I was talking about with "yelling incoherent ramblings at the wind"?

I'm not exactly sure how it was done in 4e; but the way it's supposed to work in 5e seems extremely tedious, and a lot more work than it should have to be. I mean, I see how they came by the methodology; but it also seems like it was made overly complicated for the sake of complication.
So, yet again, we're left with "3.x did it better". :sad:
Last edited by ACOS on Thu Oct 16, 2014 8:33 pm, edited 1 time in total.
ACOS
Knight
Posts: 452
Joined: Thu Apr 03, 2014 4:15 pm

Post by ACOS »

To answer the thread title:
They are 2 different concepts that measure 2 different things (and very real things, no less); though they are often conflated.
CR = creature EL.
EL = CR of the totality of the encounter. i.e., (cumulative creature CRs) + (complicating/mitigating factors).
It seems pretty intuitive to me; so I don't really have a problem with it.
"Civilized men are more discourteous than savages because they know they can be impolite without having their skulls split, as a general thing."
- Robert E. Howard
ishy
Duke
Posts: 2404
Joined: Fri Aug 05, 2011 2:59 pm

Post by ishy »

ACOS wrote:To answer the thread title:
They are 2 different concepts that measure 2 different things (and very real things, no less); though they are often conflated.
CR = creature EL.
EL = CR of the totality of the encounter. i.e., (cumulative creature CRs) + (complicating/mitigating factors).
It seems pretty intuitive to me; so I don't really have a problem with it.
Hehe, I made the same mistake at first. The thread title is about ECL (effective character level) not EL (encounter level).
Gary Gygax wrote:The player’s path to role-playing mastery begins with a thorough understanding of the rules of the game
Bigode wrote:I wouldn't normally make that blanket of a suggestion, but you seem to deserve it: scroll through the entire forum, read anything that looks interesting in term of design experience, then come back.
ACOS
Knight
Posts: 452
Joined: Thu Apr 03, 2014 4:15 pm

Post by ACOS »

ishy wrote:Hehe, I made the same mistake at first. The thread title is about ECL (effective character level) not EL (encounter level).
Huh, ain't that some shit.

In which case ... I don't care. The way that 3.x defined things seems to make sense. LA and RHD had some shitty implementation; but the basic concepts and the definitions of terms don't have anything to do with the problems.
tussock
Prince
Posts: 2937
Joined: Sat Nov 07, 2009 4:28 am
Location: Online

Post by tussock »

Foxwarrior wrote:Okay, the part where you intentionally choose to have Level mean even more contradictory things is confusing, Tussock, but aside from that, I think what you're saying is that DMs don't get the freedom to build their own encounters, and are restricted to only using the random group rolls provided in the book?
1) 4e tried making all the levels mean the same thing, and it is awful in part for that very reason. They have level 5 traps with the same numbers as level 3 traps but worth 50% more XP. They have level 18 minions with 1 hp (but +4 AC!) worth the same XP as level 10 monsters with 96 hp, and lower level things worth even more, and none of that worked.

2) DMs can pick critters and the number thereof rather than roll them up whenever they feel it appropriate; having defaults can't stop that. But the game should have a default, where tough places have tough groups of monsters. Players can then choose to go to the Pits of Hell (9) or the Nightmare Swamps (6) or the sewers under the starting town (1) and have that mean something.

So if your party is having a hard time with the Bleak Fens (3), the DM doesn't feel obliged to push you onto higher level swamp monster encounters at any time if you never actually go to the Nightmare Swamps (6). Or maybe there's a teleport trap, whatever; at least the default is that players can choose their own difficulty. Like with old megadungeons. It's fun, because players do push it all on their own.

Players who go easy places can just roflstomp the encounters nice and quick and get on with doing more of them in a day. The treasure should likewise match the place: if you want the better stuff, you go look where the hard fights are.


Frank Trollman wrote:The power of team monster does not grow linearly.
There were articles on wizards.com about the 3e playtest of the CR system (gone a few years ago, memory-holed). During the playtest period, they used a "Challenge Rating" that was different for each different sized group rather than for each monster. So a Goblin Gang (4-9) was maybe CR 1, a Goblin Band (10-100 + sergeants and leader) was maybe CR 5, a Warband (10-24 + wargs) maybe CR 8, and the tribe CR whatever, but playtested, so maybe they got it close. No actual numbers were revealed, just a theoretical example there.

It's why the numbers of monsters are split up and named like that in 3e. The CR line was going to have the same number of entries right below it. I'd love to get my hands on the playtest docs for 3e; it must have been a tight NDA.

Encounter Level was a last-minute patch to try to do something about adding monster groups of different types together, or improving individuals, or adding classes, because their system for that hadn't tested well (guess what, still doesn't). They ripped out the playtested CR per group size system, ran an eyeball regression on the numbers, and gave us the CR per monster - EL per group system instead.

Which is even more bullshit, but we're stuck with it now.
PC, SJW, anti-fascist, not being a dick, or working on it, he/him.
OgreBattle
King
Posts: 6820
Joined: Sat Sep 03, 2011 9:33 am

Post by OgreBattle »

Lago PARANOIA wrote:This is changing the subject slightly, but was 3E D&D's basic assumption that characters doubled in power every two levels sound? It sort of holds up well for the first eight levels, even with fighter classes, but things got wobbly pretty fast
So what's the math behind doubling in power every two levels? To keep it simple let's use melee orcs as an example

Lvl1 orc brute
11hp
AC 13
Melee attack bonus +4
Greataxe 1d12+3 damage

So in a mirror match, that's 60% accuracy (+4 vs. AC 13 hits on a 9 or better), dealing an average of 9.5 damage per hit

What would his stats be if he were a level 3 orc, and thus 'twice as strong' and an even match for two level 1 orcs? What would a level 5 orc's stats be for an even match slugging it out with four level 1 orcs?
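One way to chase that question is empirically: write the slugfest as a simulation and tune the higher-level stat block until it wins about half the time. A minimal sketch, where the candidate level 3 numbers are pure guesses meant to illustrate the method rather than be an answer:

Code: Select all

import random

# A stat block is (hp, ac, attack_bonus, average_damage).
ORC_1 = (11, 13, 4, 9.5)        # the level 1 brute above
CANDIDATE_3 = (25, 15, 6, 13)   # a guessed "twice as strong" orc

def win_rate(solo, mob, trials=20_000):
    """How often `solo` beats the whole `mob`, everyone swinging once per
    round (d20 + bonus vs. AC), the solo focus-firing the first enemy."""
    wins = 0
    for _ in range(trials):
        hp = solo[0]
        foes = [list(b) for b in mob]
        while hp > 0 and foes:
            if random.randint(1, 20) + solo[2] >= foes[0][1]:
                foes[0][0] -= solo[3]
                if foes[0][0] <= 0:
                    foes.pop(0)
            for f in foes:
                if random.randint(1, 20) + f[2] >= solo[1]:
                    hp -= f[3]
        wins += hp > 0
    return wins / trials

# ~0.5 would mean the candidate really is an even match for two brutes.
print(win_rate(CANDIDATE_3, [ORC_1, ORC_1]))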
mean_liar
Duke
Posts: 2187
Joined: Fri Mar 07, 2008 7:54 pm
Location: Boston

Post by mean_liar »

I don't buy that Team Monster's capabilities grow linearly, unless Team Monster is playing a different game than the PCs. I get why that idea has traction, but I don't buy it.

Most, if not all, of the judgment there is context-specific. How does the game handle AoEs, how does the game handle swarm tactics, what is the encounter distance, how easy is it to isolate characters, what is the outcome of a character being dropped, how common is true death.

Fundamentally, it isn't that 1 enemy for 1 PC is a slight margin of victory and 6 enemies is a compounding of that margin to 6x (or greater). An encounter (well, a non-climactic encounter) isn't about margins of victory; it's about using resources, testing the characters, letting them feel like heroes... mechanically, though, we can just zero in on using resources. Adding enemies requires using more resources, almost certainly in a linear fashion. Not only that, but adding more enemies makes the situation more complex, leading to larger standard deviations and greater likelihoods of one character going down even if victory is more assured. For that character, things feel more challenging than in the simpler scenario.

Just take an extension on the necessity of maintaining a slight margin of victory regardless of group size: with 8 PCs, that slight margin of victory is going to see what, two or three PCs dropped? With a group of 4, that slight margin of victory might see one PC dropped. How does the game handle that, within the context of keeping the story moving and mechanically? Sure, a slight margin of victory is more dramatic, but I don't know if it's the best target for encounter design with larger groups.

Regardless, I think there isn't much of a difference between a per-PC or per-group encounter benchmark unless there's some large scaling going on (the game assumes groups are 4 PCs, you're trying to balance for 7). I think the larger lesson is that games should have a warning label about playing at tables with more PCs than the game assumes should constitute a party.
Last edited by mean_liar on Sun Oct 19, 2014 1:31 pm, edited 1 time in total.
Username17
Serious Badass
Posts: 29894
Joined: Fri Mar 07, 2008 7:54 pm

Post by Username17 »

mean liar wrote:I don't buy that Team Monster's capabilities grow linearly, unless Team Monster is playing a different game than the PCs. I get why that idea has traction, but I don't buy it.
I don't care if you buy that. That isn't a point anyone is claiming is true. Monster capabilities don't grow linearly. Team Monster grows quadratically with extra monsters. XP Budgets, like those in 4e D&D, tend to assume that team monster grows linearly, but they are obviously full of shit on that point. 5e has a weird XP Budget math kludge to try to represent the fact that two monsters are more than twice as tough as one monster, but it's really clunky and doesn't work well. Like everything in 5e, it's kind of the worst of all possible worlds - lacking the relative simplicity of 4e while still giving outputs that are stupid.
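For reference, the 5e kludge works roughly like this (reconstructed from memory of the DMG's encounter multiplier table, so treat the breakpoints as approximate): sum the monsters' XP, then scale by a step function of head count.

Code: Select all

def adjusted_encounter_xp(monster_xps):
    # Multiplier breakpoints as I remember the 5e DMG table (approximate):
    # 1 monster x1, 2 monsters x1.5, 3-6 x2, 7-10 x2.5, 11-14 x3, 15+ x4.
    n = len(monster_xps)
    mult = 4.0
    for limit, m in ((1, 1.0), (2, 1.5), (6, 2.0), (10, 2.5), (14, 3.0)):
        if n <= limit:
            mult = m
            break
    return sum(monster_xps) * mult

print(adjusted_encounter_xp([100]))       # 100.0
print(adjusted_encounter_xp([100, 100]))  # 300.0: two orcs "cost" 3x one orc

A step function of raw head count that ignores the monsters' levels relative to the party is exactly the clunkiness being complained about here.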
mean liar wrote:Fundamentally, it isn't that 1 enemy for 1 PC is a slight margin of victory and 6 enemies is a compounding of that margin to be 6x (or greater). An encounter (well, non-climatic encounters) isn't about margins of victory, it's about using resources, testing the characters, letting them feel like heroes... mechanically though we can just zero in on using resources. Adding enemies requires using more resources, almost certainly in a linear fashion. Not only that but adding more enemies makes the situation more complex, leading to larger standard deviations, greater likelihoods of one character going down even if victory is more assured. For that character things feel more challenging than the simpler scenario.

Just take an extension on the necessity of maintaining a slight margin of victory regardless of group size: with 8 PCs, that slight margin of victory is going to see what, two or three PCs dropped? With a group of 4, that slight margin of victory might see one PC dropped. How does the game handle that, within in the context of keeping the story moving and mechanically? Sure, a slight margin of victory is more dramatic but I don't know if it's the best target for encounter design with larger groups.
There are a lot of words here, but I don't think the sum total is any less confused than your intro where you bravely stated that you don't buy a proposition that no one here embraced.

Basically when you scale up the PCs and face more monsters to compensate, you are looking at margins of victory. Specifically you're looking at iterative probability. You have some victory chance that you think is acceptable for a PC facing an appropriate challenge. Let's say it's 90%. Well, if you just scale up both sides, team monster no longer has a 10% chance of victory, now they have a 0.003% chance of victory. That's really low, and well outside the range you just said an appropriate encounter was supposed to fall in.

As an aside, this is one of the big reasons that we Same Game Test at the difficulty level where the game says that we should only win half the time. Because 50/50 actually does scale up and down to any number of players and monsters and stay 50/50. It makes the math way simpler. A single player character facing a threat they should beat 50% of the time scales up to four characters facing four threats that they should each beat 50% of the time, and beating them 50% of the time. But if the chances were 60% or 40%, then the act of scaling up would cause the numbers to diverge (in the limit, infinitely many characters who each have a 60% chance of winning against each of infinitely many monsters win the entire battle ~100% of the time).
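The invariance claim is easy to check with a toy attrition model (all the assumptions are mine: equal hit points, one attack per round at a flat hit chance, simultaneous resolution, random targeting, a mutual wipe counted as half a win):

Code: Select all

import random

def team_win_rate(n, p_pc=0.5, p_mon=0.5, hp=5, trials=20_000):
    """n PCs vs n monsters; every survivor attacks a random living enemy
    each round for 1 damage, both sides resolving simultaneously.
    Returns Team PC's win rate, counting mutual wipes as half a win."""
    score = 0.0
    for _ in range(trials):
        pcs, mons = [hp] * n, [hp] * n
        while pcs and mons:
            dmg_mons = [0] * len(mons)
            dmg_pcs = [0] * len(pcs)
            for _ in pcs:
                if random.random() < p_pc:
                    dmg_mons[random.randrange(len(mons))] += 1
            for _ in mons:
                if random.random() < p_mon:
                    dmg_pcs[random.randrange(len(pcs))] += 1
            mons = [h - d for h, d in zip(mons, dmg_mons) if h > d]
            pcs = [h - d for h, d in zip(pcs, dmg_pcs) if h > d]
        score += 1.0 if pcs else (0.5 if not mons else 0.0)
    return score / trials

for n in (1, 2, 4, 8):
    print(n, round(team_win_rate(n), 3),             # stays ~0.50 at any size
             round(team_win_rate(n, p_pc=0.55), 3))  # a small edge compounds

At even odds the symmetry holds at every scale; give the PCs even a 55% hit chance and the team win rate climbs toward certainty as both sides grow, which is the divergence described above in action.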

-Username17
Smeelbo
Apprentice
Posts: 86
Joined: Sun Feb 28, 2010 12:44 am

Sum of Squares

Post by Smeelbo »

Back in the days before 3.0, when we were playing the Warlock (CalTech/MIT) version of D&D, we used the sum of squares to estimate appropriate encounter sizes. That is, the sum of the squares of the levels of the monsters was approximately equal to the sum of the squares of the player character levels. While this did not account for the non-linear advantage of a larger group size (in my experience, the 5th and 6th characters should count roughly as two characters each), it did seem to give good-ish results.

Back in those days, monster level was the number of hit dice, with more magical creatures having smaller hit die size.

So for example, a party of 5 characters, of levels 2, 4, 5, 5, and 6, had a "bounce factor" of 4 + 16 + 25 + 25 + 36 = 106. Opposing some mostly 2nd level bugbears with a few higher level officers, that group might face 3 x 4th level officers (3 x 16 = 48), a 3rd level shaman (9), and 25 x 2nd level grunt bugbears.

Admittedly, Warlock differed significantly from OD&D, in that casters had to wait 5-6 rounds between casting spells, and melee types made an average of 3 attacks per round, so melees contributed a lot more than they do in 3.5; but in my experience, something like sum of squares, with an accounting for group size, gives better results.
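The rubric is easy to mechanize. A minimal sketch of the sum-of-squares check as described (the Warlock-specific details are lost, so this is just the arithmetic):

Code: Select all

def bounce_factor(levels):
    """Sum of squared levels, per the Warlock-era rule of thumb."""
    return sum(lvl * lvl for lvl in levels)

print(bounce_factor([2, 4, 5, 5, 6]))  # 106, matching the worked example

# The sample opposition: 3 fourth-level officers, a 3rd level shaman, and
# 25 second-level grunts. As written it sums to 157, comfortably over the
# party's 106 -- the kind of slip this helper makes easy to catch.
print(bounce_factor([4, 4, 4, 3] + [2] * 25))  # 157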

Smeelbo
mean_liar
Duke
Posts: 2187
Joined: Fri Mar 07, 2008 7:54 pm
Location: Boston

Post by mean_liar »

Yeah, my argument is that margin of victory isn't necessarily a good metric for what makes a good encounter, or at least shouldn't be the primary metric. The primary metric should be how many resources the encounter consumes, not the odds of a failure - because the odds of a failure are always going to be a function of resources previously expended, and are only static from a fresh state. You can roughly gauge resource expenditure per combat, but odds of failure you cannot, as that's primarily a function of how many encounters have already occurred, among other things.

There's still also the issue that more complex battles will have swingier results within the party itself. A 10% party fail at a table of 6 is going to have a higher body count, be more jarring to the game's progress, and make a pyrrhic victory more likely than a 10% party fail at a table of 4.

Now, that's all more interesting than per-PC/per-party for baselining encounters, since I think the difference between the two is marginal; but from the perspective of baseline encounter generation, I can't imagine a non-linear function for scaling between party sizes being easier to implement than a linear per-PC function based on expected resource use.

I do agree that scaling up from "average" to "hard" ought not be a simple linear function, but you've already expressed a strong disdain for adding, so I'm curious what you think an appropriate scaling function would look like. Right now your system is a non-linear party size adjustment followed by a non-linear difficulty adjustment. Considering average party level as an input, that's a three-dimensional function where only one variable is obvious. Per-PC allows at least one of those dimensions to be linear for baselining, leaving only one (difficulty above/below baseline) as non-linear.

What did you have in mind?
brized
Journeyman
Posts: 141
Joined: Sun Jun 17, 2012 9:45 pm

Post by brized »

mean_liar wrote:The primary metric should be how many resources the encounter consumes, not the odds of a failure - because the odds of a failure are always going to be a function of resources previously expended, and are only static from a fresh state.
A few problems with this:

1) In a system dependent upon resource depletion as a balancing and difficulty factor, how do you address the 5-minute workday as a player tactic?

2) The amount of resources a party has from a fresh state is not static if you have classes with differing amounts of consumable resources, like casters vs. non-casters in 3.X, and wealth you can convert into variable resources, like wands vs. per-day or constant effect items.
Tumbling Down wrote:
deaddmwalking wrote:I'm really tempted to stat up a 'Shadzar' for my game, now.
An admirable sentiment but someone beat you to it.
mean_liar
Duke
Posts: 2187
Joined: Fri Mar 07, 2008 7:54 pm
Location: Boston

Post by mean_liar »

Those are indeed going to change the discussion. A single encounter system that can support both a party that uses 5min workdays and a party that doesn't have that luxury is probably going to be incoherent, or complex. I would imagine that to have a coherent system you'd have to posit one or the other, and work from there.

Certainly resource depletion as a balancing factor in a game with 5min workdays is irrelevant. I don't particularly like games like that, since it implies that your enemies are also doing 5min workdays and rocket launcher tag/ambushes define most conflicts, but yes I totally agree that if 5min workdays are a thing then resource depletion evaporates as a concern, or at least degrades significantly.

Differing consumables are tricky, and system dependent. Wands of CLW in DnD, for example, obviate HP loss as a long-term resource but keep its relevance as a short-term one. Characters who are on all the time - the proverbial DnD Fighter - are playing a different game than characters who aren't, and unfortunately there's no universal way to adjudicate this, since it reflects back on the 5min workday problem: when there are large disparities between character functions, baselining gets very context-dependent. Wealth and wealth-by-level guidelines exist for a reason... but the other things you point out are just as troubling for "failure" as for "resources" as a metric. Characters with shitty/anomalously good gear will throw off your baseline.

What you ideally want is a game that gives everyone the same basic mechanics: you don't want always-on-yet-lesser characters and shining-bright-but-then-disappearing characters, you want always-on characters with a few shining bright tricks. You want to decouple wealth from immediate personal might. You want some consistency between archetypes. Basically you can't baseline if your game doesn't support a baseline, and that goes for "failure" and "resources" and any other metric.

I do think that if the only resource in play is spells/day in a 5min workday environment, then "failure" is a better measure. That assumes that in the game, wounds heal after combat, time isn't much of a concern, abilities don't degrade with use... that's DnD, for sure. Maybe only at levels 9+ when Teleport comes online (plus or minus), but that is DnD at 9th level or sooner.

Contrast that with Shadowrun, where scenes take much longer, there's no teleporting, healing within the context of a mission can be problematic, even Stun damage accumulates, and there are some hard lines on role protection such that if the Rigger/Hacker/Mage/Stealthy-dude gets hurt then they're seriously compromised, and the mission along with it. In that case, damage is a resource that isn't easily obviated by the setting, 5min workdays aren't available, and resource management is more important.
Username17
Serious Badass
Posts: 29894
Joined: Fri Mar 07, 2008 7:54 pm

Post by Username17 »

I don't know why you think trying to change the subject to resource expenditure would change anything. The logic of resource expenditure is exactly the same as the logic of margins of victory.

Imagine for the moment the crazy scenario where there are two characters whose expected resource depletions are different. One of them we will call a "Wizard" and we expect them to overcome a Monster by expending a charge of magic but very few turns. The other character we will call a "Fighter" and we expect him to overcome a Monster by grinding away on it for many turns during which the resource loss is in the form of actions taken by the Monster such as hit point depletion. Crazy, right? Now let's put these two characters in with two Monsters. The Wizard uses his resources as advertised, but then during the remaining rounds of the combat he can support the Fighter (even if just by plinking with a crossbow), and the Fighter's expected hit point loss is now less.

Heck, let's even put two Fighters against two monsters. Imagine that it takes either X turns or 2X turns for a Fighter to grind away a monster, subjecting him to an average depletion of 1.5X Monster attacks in a fight with a Monster. But now we have two Fighters and two Monsters, so one of them is finished with the Monster in X turns and the other would be finished in 2X turns, but after X turns it turns into two Fighters versus one Monster and the second Monster only stays up for 1.5X turns or less. So while the resource depletion of One Fighter versus One Monster averages 1.5X Monster Attack Units, the expected resource depletion of Two Fighters versus Two Monsters is south of 1.25X Monster Attack Units.

It's simply mathematically true that if you expect Team Player to win against Team Monster, then if you multiply the numbers on both sides by the same amount, the bulge Team Player has is going to be proportionately larger. You are literally arguing against simple algebra and you're mathematically wrong.

On the addition of squares thing: squaring, or indeed multiplying or dividing levels by any number only changes anything if you are using mixed level groups. 2+2 == 2+2 and 4+4 == 4+4. It only changes things in that 2+2 == 1+3 but 4+4 =/= 1+9. In modern games, mixed level PC parties are much rarer, but mixed level Monster groups are still normal.

D&D has historically kept pretty close to quadratic power gains for level increases in most editions for many classes or monster types. But 3rd edition, for example, claims to be going for exponential power gain and pretty much achieves that for the casters and a lot of the monsters. Even Rogues are, if not playing with mysterious Pathfinder nerfs, able to keep up with the exponential power curve if they spend their equipment budget very circumspectly.

-Username17
Lago PARANOIA
Invincible Overlord
Posts: 10555
Joined: Thu Sep 25, 2008 3:00 am

Post by Lago PARANOIA »

OgreBattle wrote:So what's the math behind doubling in power every two levels?
It's not really an explicit formula so much as the convergence of several independent factors. Just looking at warrior types, we can see that they get feats, BAB, magical item upgrades, stat increases, new class features, extra attacks, hit points, saves, and so forth at staggered but stochastically even rates. So even a shit-tier class like a fighter has, going from level 5 to 7, acquired doodads like mithral full plate, an extra attack, probably some kind of +2 stat booster, and two extra feats, among other things. With how combat in d20 operates, I'd call that a doubling of power.

It's pretty fuzzy and requires some assumptions (specifically feat and magic item selection), but I think it holds up pretty well until around level 9 or so, which is about when the many contributing factors to non-spellcaster competence start failing to keep up. Not coincidentally, level 9 is about when 'supernatural abilities and tactics or GTFO' obstacles start showing up in full force.
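As a sanity check on the doubling intuition, you can treat those doodads as independent multipliers. Every factor below is an invented estimate, not a rules citation; the point is only that a handful of modest multipliers compound to roughly 2x across two levels.

Code: Select all

# Hypothetical multiplicative contributions for a fighter going from
# level 5 to level 7; all of these estimates are assumptions.
factors = {
    "second iterative attack":     1.40,  # extra swing at -5 to hit
    "+2 BAB worth of accuracy":    1.10,
    "+2 stat booster (hit & dmg)": 1.12,
    "two more feats":              1.10,
    "better armor and more hp":    1.15,
}

power = 1.0
for mult in factors.values():
    power *= mult

print(round(power, 2))  # ~2.18: in the neighborhood of "doubled"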
Josh Kablack wrote:Your freedom to make rulings up on the fly is in direct conflict with my freedom to interact with an internally consistent narrative. Your freedom to run/play a game without needing to understand a complex rule system is in direct conflict with my freedom to play a character whose abilities and flaws function as I intended within that ruleset. Your freedom to add and change rules in the middle of the game is in direct conflict with my ability to understand that rules system before I decided whether or not to join your game.

In short, your entire post is dismissive of not merely my intelligence, but my agency. And I don't mean agency as a player within one of your games, I mean my agency as a person. You do not want me to be informed when I make the fundamental decisions of deciding whether to join your game or buying your rules system.
ishy
Duke
Posts: 2404
Joined: Fri Aug 05, 2011 2:59 pm

Post by ishy »

FrankTrollman wrote:So while the resource depletion of One Fighter versus One Monster averages 1.5X Monster Attack Units, the expected resource depletion of Two Fighters versus Two Monsters is south of 1.25X Monster Attack Units.

-Username17
The problem, though, is that fighter A might have enough hitpoints to withstand 1 * 1.5X Monster Attack Units but not enough hitpoints for 2 * 1.25X Monster Attack Units (if the monsters attack the same target).
Gary Gygax wrote:The player’s path to role-playing mastery begins with a thorough understanding of the rules of the game
Bigode wrote:I wouldn't normally make that blanket of a suggestion, but you seem to deserve it: scroll through the entire forum, read anything that looks interesting in term of design experience, then come back.
Username17
Serious Badass
Posts: 29894
Joined: Fri Mar 07, 2008 7:54 pm

Post by Username17 »

ishy wrote:
FrankTrollman wrote:So while the resource depletion of One Fighter versus One Monster averages 1.5X Monster Attack Units, the expected resource depletion of Two Fighters versus Two Monsters is south of 1.25X Monster Attack Units.

-Username17
The problem, though, is that fighter A might have enough hitpoints to withstand 1 * 1.5X Monster Attack Units but not enough hitpoints for 2 * 1.25X Monster Attack Units (if the monsters attack the same target).
Who gives a shit? We're talking about total resource depletion. If one character takes all the damage and the other character takes none of the damage, the total use of healing potion at the end of the battle is the same. No matter how the monsters and the players distribute their damage, the total healing potion required to clean up the monster damage is simply less if there are more fighters and more monsters. And that's with no intra-party synergy at all. The fact that the players have an expected margin of victory other than zero means that the battle becomes more one-sided as it scales up. That's just math.

-Username17
ishy
Duke
Posts: 2404
Joined: Fri Aug 05, 2011 2:59 pm

Post by ishy »

Because you can't fix death with a healing potion.
Gary Gygax wrote:The player’s path to role-playing mastery begins with a thorough understanding of the rules of the game
Bigode wrote:I wouldn't normally make that blanket of a suggestion, but you seem to deserve it: scroll through the entire forum, read anything that looks interesting in term of design experience, then come back.
Lago PARANOIA
Invincible Overlord
Posts: 10555
Joined: Thu Sep 25, 2008 3:00 am

Post by Lago PARANOIA »

I think ishy is counting hp to zero (i.e. death) as a kind of catastrophic failure where it ends up spiking resource depletion above and beyond that of using healing potion to recover hit points. If it takes two rounds to kill an orc but three attacks for an orc to kill a party member, a party of six PCs against six orcs will lose more resources under this interpretation, assuming the six orcs can focus fire, than one PC against one orc.

EDIT: Of course, an astute reader might have noticed that these new assumptions create a whole new set of unexamined assumptions. In games which have a large cushion of health between incapacitation and death, this only starts to hold where monster spite is more important than monster victory. It also assumes that death is a greater source of resource depletion than hit point loss, which starts to not be the case at a certain level in D&D/Pathfinder, and furthermore gets fuzzy when we're talking about replacement characters. It also assumes that the monsters can focus fire as they see fit and that Team PC is an undifferentiated brick of defense and health for each member.
Last edited by Lago PARANOIA on Mon Oct 20, 2014 10:01 am, edited 2 times in total.
Josh Kablack wrote:Your freedom to make rulings up on the fly is in direct conflict with my freedom to interact with an internally consistent narrative. Your freedom to run/play a game without needing to understand a complex rule system is in direct conflict with my freedom to play a character whose abilities and flaws function as I intended within that ruleset. Your freedom to add and change rules in the middle of the game is in direct conflict with my ability to understand that rules system before I decided whether or not to join your game.

In short, your entire post is dismissive of not merely my intelligence, but my agency. And I don't mean agency as a player within one of your games, I mean my agency as a person. You do not want me to be informed when I make the fundamental decisions of deciding whether to join your game or buying your rules system.
mean_liar
Duke
Posts: 2187
Joined: Fri Mar 07, 2008 7:54 pm
Location: Boston

Post by mean_liar »

FrankTrollman wrote:I don't know why you think trying to change the subject to resource expenditure would change anything.

...

Heck, let's even put two Fighters against two monsters. Imagine that it takes either X turns or 2X turns for a Fighter to grind away a monster, subjecting him to an average depletion of 1.5X Monster attacks in a fight with a Monster. But now we have two Fighters and two Monsters, so one of them is finished with the Monster in X turns and the other would be finished in 2X turns, but after X turns it turns into two Fighters versus one Monster and the second Monster only stays up for 1.5X turns or less. So while the resource depletion of One Fighter versus One Monster averages 1.5X Monster Attack Units, the expected resource depletion of Two Fighters versus Two Monsters is south of 1.25X Monster Attack Units.
The reason is that sometimes that battle is both Fighters finishing their opponents in X turns (depletion X), and sometimes both finishing in 2X+ turns (depletion 3X+), and the aggregate effect is that you're back to an expected outcome in the neighborhood of 1.5X resource depletion... upholding resource depletion and encounter benchmarking as a per-PC metric.

I don't know why you insist on median results rather than mean results, or why you ignore the effect that pushing a larger group to the same margin of total success/failure is more likely to result in lots of dead PCs. Based on your response that death is a matter of more healing potions and largely a speedbump, I assume you're ignoring death because you don't consider it a big deal, which is a pretty odd position to take in the abstract.
It's simply mathematically true that if you expect Team Player to win against Team Monster, then if you multiply the numbers on both sides by the same amount, the bulge Team Player has is going to be proportionately larger. You are literally arguing against simple algebra and you're mathematically wrong.
You're oversimplifying my argument (insert snarky Frank response here). We're not talking about margins of victory - I already agree with you regarding margins of victory accumulating (go take your victory lap for stating the obvious and then another for repeating what I already agreed with). What happens within that margin of victory gets swingier as the encounter size increases. You seem willing to accept that there will be more PC deaths per combat in order to keep a static margin of victory, but you don't engage with that meaningfully if you're assuming that death isn't really that big of a deal.

The issue is exactly what Lago is talking about - there is a raft of assumptions to be made, and the fact that you're making them without stating them explicitly, then projecting them onto other people, just leads you to inexorably conclude that those who have different opinions are idiots.

In DnD land, at sufficient levels, with 5min workdays, easy resurrection, and a slew of other assumptions, you can safely assume that resources aren't a meaningful depletion in the abstract, and that TPK/not-TPK are your only real results from battle with other intermediate states being marginal distractions.

The fact that this is a special case extension of resource depletion in specific conditions where the only resource in question is living itself passes you by.

...

Finally, how do you propose to baseline things? That's a genuine design development question. Running endless combos of battles to determine benchmarks? Just doing signature battles and then extrapolating? From a perspective of per-party being the benchmark, don't you have to also run those benchmark tests from both sides (monster group X vs party group A, X vs B, X vs C, Y vs A, Y vs B...)? That sounds like a shitload of design work. Maybe it has to be?

From there, how do you manage the non-linearity of encounter difficulty and party size in a manner that a GM can engage with? Most games have non-linear power progression, and so you presumably have compounding exponentials when dealing with "hard mode"; that implies to me that there's greater risk of miscalculation at higher levels in harder battles, which feels intuitively correct. If that's insurmountable, perhaps some kind of powers should be built into the back-end of PCs such that at higher levels, as the risk of the GM accidentally overpowering the party increases, PCs have more escape hatch/ejection seat powers?
Last edited by mean_liar on Mon Oct 20, 2014 2:46 pm, edited 1 time in total.
Username17
Serious Badass
Posts: 29894
Joined: Fri Mar 07, 2008 7:54 pm

Post by Username17 »

Meanliar wrote:The reason is that sometimes that battle is both Fighters finishing their opponents in X turns (depletion X), and sometimes both finishing in 2X+ turns (depletion 3X+), and the aggregate effect is that you're back to an expected outcome in the neighborhood of 1.5X resource depletion... upholding resource depletion and encounter benchmarking as a per-PC metric.
I'm going to stop you right there, because no it fucking doesn't.
           Fighter 1   Fighter 2   Depletion per Fighter
Fight 1       1X          1X            1X
Fight 2       1X          2X            1.25X
Fight 3       2X          1X            1.25X
Fight 4       2X          2X            2X
Average       1.5X        1.5X          1.375X

(The Fighter columns give each Fighter's solo kill time; the last column is the expected monster attacks absorbed per Fighter.)

Yes, there is the occasion where both Fighters roll badly and end their respective fights at the same time and never offer any assistance to each other, but it's a rare case, and it doesn't actually change the outcome very much. Indeed, in a more realistic scenario where each Fighter could end the fight in anywhere from 1.1X to 1.7X rounds, that edge case becomes even edgier (1 out of 100 instead of 1 in 4).

You're just mathematically wrong. Stop being wrong.

-Username17
mean_liar
Duke
Posts: 2187
Joined: Fri Mar 07, 2008 7:54 pm
Location: Boston

Post by mean_liar »

You know, Frank, you aren't very good at math if it doesn't serve your presuppositions. You know damn well that there are going to be cases where the fight goes one more round beyond 2X, and then you're at 1.5X resources consumed per fighter. In fact, those are the specific deviations and situations I mention in pretty much every post, the ones where I question whether you're ignoring them for, well... charitable reasons, but I guess that was overly optimistic.

Stop lying. It's not cool.

...

If you actually want to talk design, that'd be cool too. I thought this was a design forum. Disingenuous argument forums are much more common. You might like it there better, wherever that is.
Finally, how do you propose to baseline things? That's a genuine design development question. Running endless combos of battles to determine benchmarks? Just doing signature battles and then extrapolating? From a perspective of per-party being the benchmark, don't you have to also run those benchmark tests from both sides (monster group X vs party group A, X vs B, X vs C, Y vs A, Y vs B...)? That sounds like a shitload of design work. Maybe it has to be?

From there, how do you manage the non-linearity of encounter difficulty and party size in a manner that a GM can engage with? Most games have non-linear power progression, and so you presumably have compounding exponentials when dealing with "hard mode"; that implies to me that there's greater risk of miscalculation at higher levels in harder battles, which feels intuitively correct. If that's insurmountable, perhaps some kind of powers should be built into the back-end of PCs such that at higher levels, as the risk of the GM accidentally overpowering the party increases, PCs have more escape hatch/ejection seat powers?
Chamomile
Prince
Posts: 4632
Joined: Tue May 03, 2011 10:45 am

Post by Chamomile »

Okay, so let's imagine that it takes between X and 3X turns to defeat a monster. A lone Fighter is now looking at an average of 2X turns' worth of HP depletion, and our table is now 9 entries long instead of 4. In a 1X/3X combination the second orc gets focus-fired when he's about 1/3 dead, the remaining 2/3 goes about twice as fast, which means his lifespan is reduced by 1/3 and he lasts 2X turns, which means 1X+2X=3X divided by 2 = an average of 1.5X turns' worth of damage per fighter. In a 2X/3X the second orc gets focus-fired when he's 2/3 dead, so he's killed 1/6 faster and lasts 2.5X, which means 2X+2.5X=4.5X divided by 2 equals 2.25X. This brings the average resource depletion to about 1.78X, which is still lower than the 2X average of one fighter vs. one orc. So basically your thought experiment has succeeded in making the table longer and not much else.
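Both versions of the table can be ground out mechanically. Here's a short enumeration under the thread's stated assumptions (each fighter grinds his own monster, and once a fighter frees up, the surviving monster takes damage twice as fast):

Code: Select all

from fractions import Fraction
from itertools import product

def per_fighter_depletion(solo_times):
    """Expected monster-attack rounds absorbed per fighter when two
    fighters face two monsters; `solo_times` are the equally likely
    one-on-one kill times, in units of X."""
    total = Fraction(0)
    cases = list(product(solo_times, repeat=2))
    for t1, t2 in cases:
        fast, slow = min(t1, t2), max(t1, t2)
        # The quicker monster attacks for `fast` rounds; the slower one
        # dies at fast + (slow - fast)/2 once it's being double-teamed.
        attack_rounds = fast + (fast + Fraction(slow - fast, 2))
        total += attack_rounds / 2  # averaged over the two fighters
    return total / len(cases)

print(per_fighter_depletion([1, 2]))     # 11/8 = 1.375X (Frank's table)
print(per_fighter_depletion([1, 2, 3]))  # 16/9 ~ 1.78X (the 9-entry table)

Either way the per-fighter depletion lands below the corresponding solo average (1.5X and 2X respectively), which is the whole point.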
Username17
Serious Badass
Posts: 29894
Joined: Fri Mar 07, 2008 7:54 pm

Post by Username17 »

Yeah, no matter what parameters you put in, the two Fighters versus two Monsters thing takes the same number of rounds or less. Because our assumption is that each Fighter is going to beat the monster they are up against and take some number of rounds (and therefore counterattacks) to do it. How much less is going to vary depending on what numbers you slot in, and I don't give a shit because the average is always going to be lower because you're averaging some number of numbers that aren't bigger with some number of numbers that are definitely smaller.

The only way that the resource expenditure doesn't change when you escalate the combat to have more player characters and more monsters is if the margin of victory is already zero. If team monster and team player have an equal chance of victory, then doubling the size of the battle leaves it at 50/50. That is, as I said earlier, a very important reason why K and I designed the Same Game Test to operate at a target 50/50 win/loss ratio. Because doing it at any other value would cause the numbers to get all wonky going from 4 expected PCs to only 1.

That is why when you design a system for "normal encounters" it's going to be for an ideal party size. And it's going to require non-linear kludges to output normal encounters for smaller and larger parties. If you double the number of players and monsters, a "normal encounter" has gotten easier. If you halve the number of players and monsters, a "normal encounter" has gotten harder. This is both mathematically and experientially true, and it takes serious head-up-ass to refuse to acknowledge this fact.

-Username17