Cardinal Classic Discussion

Old college threads.
Locked
User avatar
Mike Bentley
Sin
Posts: 6134
Joined: Fri Mar 31, 2006 11:03 pm
Location: Bellevue, WA
Contact:

Cardinal Classic Discussion

Post by Mike Bentley »

Is it okay to talk about Cardinal Classic? I don't believe that there are any other mirrors going on, but I could be mistaken.
Mike Bentley
VP of Editing, Partnership for Academic Competition Excellence
Adviser, Quizbowl Team at University of Washington
University of Maryland, Class of 2008
Schweizerkas
Lulu
Posts: 83
Joined: Sat May 12, 2007 1:01 am
Location: Stanford, CA

Post by Schweizerkas »

There are no mirrors left, so go ahead and discuss.
User avatar
Skepticism and Animal Feed
Auron
Posts: 3208
Joined: Sat Oct 30, 2004 11:47 pm
Location: Arlington, VA

Post by Skepticism and Animal Feed »

I would like to complain about this lead-in:
In his best known role, this man’s greatest act was to select Gideon Welles as Secretary of the Navy.
I immediately negged with Abraham Lincoln upon hearing this. I suspect I was not alone in doing so. In retrospect, Lincoln probably did greater acts than appointing Welles, but Welles was one of the greatest Navy secretaries (if not THE greatest) in American history, and this is a view that somebody might legitimately have. (Or, at the very least, I could conceive of a historian -- especially a naval history buff -- making this claim in a book out of either adoration of the navy or a desire to stand out). Plus, if your moderator was not speaking clearly, it's possible to miss the "greatest". There was almost certainly a better way to phrase this -- something like "he encouraged one President to appoint Gideon Welles". Or better yet, don't put somebody as famous as Gideon Welles in the lead-in and instead refer to Welles by one of his many famous acts. Something like "He encouraged one President to appoint the Navy Secretary who gave John Ericsson a contract".
Bruce
Harvard '10 / UChicago '07 / Roycemore School '04
ACF Member emeritus
My guide to using Wikipedia as a question source
User avatar
cvdwightw
Auron
Posts: 3446
Joined: Tue May 13, 2003 12:46 am
Location: Southern CA
Contact:

Post by cvdwightw »

Yeah, I thought our packet was probably the weakest of the set, which is weird since we got that packet to the editors in mid-December. Not having the original packet currently with me, I can't say that that particular clue was in the original question, but I'm pretty sure that the original lead-in was the flogging in the navy clue. Same thing for the Thor tossup -- I know I didn't write that first clue, and I'm pretty sure more people buzzed on the clue about goats than the Aurvandil's Toe clue. The transparent Minneapolis tossup was indeed our fault, though.

Outside of a few confusing questions (e.g. lac operon) and inappropriate/transparent lead-ins (e.g. "this guy was a Treasury Secretary who didn't like the Second Bank of the U.S. and since this is the lead-in he's known for other stuff"), this was a pretty good packet set.
User avatar
Sima Guang Hater
Auron
Posts: 1880
Joined: Mon Feb 05, 2007 1:43 pm
Location: Nashville, TN

Post by Sima Guang Hater »

Generally, the questions were of very high quality, and there were several standout tossups (sublime, ode on a favorite cat etc., gresham's law, malevich, Ngugi, Giacometti, etc.) that I enjoyed; in general, the answer selection struck a great balance between interesting, accessible, and rewarding to the top players present. Very few tossups went dead in our rounds from what I remember, but judging from the MIT stats, that's not universal.

However, the set had a few overarching issues:

1) Some misplaced clues:
-"Blue Cliff Record" in the first line of a koans tossup
-Lugh in the second line of a Cuchulainn tossup
-Transacetylase and permease in the first line of a lac operon tossup (I don't think this tossup can be written)
-Something about gunrunning in the first line of a Rimbaud tossup
-Ringhorn and Breidablik in the first clue of a Baldr tossup (this one's not very egregious, because there are so many tossups on this guy that every clue about him will be known by someone).
-Overland Monthly in a very early clue in the Bret Harte tossup.
-Leber's neuropathy in the first line of a mitochondrial DNA tossup (I'm not sure this can be written well either).

2) Some fluff clues seemed to sneak in. For example, the Mombasa tossup mentioned that Zheng He may have visited it. Considering that guy's been fucking everywhere, it's about as useful as saying "Charles Meigs once nailed a hooker in this city". This is especially bad in a tournament where many of the tossups went 9 lines long (not in itself a bad thing, but if you want to write a 9 line tossup, the clues had better be goddamn dense and goddamn interesting). In general, however, the early clues were interesting, difficult, and buzzable with knowledge, which is good.

3) There were some questions that had the "what the hell" factor, like Rembrandt's wife, those mats of cyanobacteria, some film which was the only film of some really obscure artist (I think it was called Mechanical Ballet), Xenu (Weiner's Law #1), characters from Howells's "A Modern Instance", near-impossible Graham Greene novels, and maybe a few others. Also no one fucking cares about figure skating.

4) Length. Far be it from me to complain about 9 line tossups on Basil II, but some of the bonus parts were excessively long. 2 lines per part maximum please; I feel that that's better for the pace of the game and for letting everyone out just a little earlier.

I definitely enjoyed myself; you guys wrote a great set, ran a smooth tournament, and you couldn't ask for a much better field. It was also fun to run into all of these west coast Quizbowlers who I've only ever briefly seen at nationals or talked to on the internet. It was definitely worth the insane trip/ordeal that Dennis and I went through.
Eric Mukherjee, MD PhD
Brown University, 2009
Perelman School of Medicine at the University of Pennsylvania, 2018
Medicine Intern, Yale-Waterbury, 2018-9
Dermatology Resident, Vanderbilt University Medical Center, 2019-

Writer, NAQT, NHBB, IQBT, ACF, PACE
User avatar
tkpatel
Lulu
Posts: 75
Joined: Wed Dec 26, 2007 8:51 pm
Location: UCLA

Post by tkpatel »

I can't say that that particular clue was in the original question, but I'm pretty sure that the original lead-in was the flogging in the navy clue.
I wrote that question on Hannibal Hamlin and as Dwight said, the sec navy was not my original lead-in. The editors probably added it in.
User avatar
pray for elves
Auron
Posts: 1050
Joined: Thu Aug 24, 2006 5:58 pm
Location: 20001

Post by pray for elves »

I played at the MIT mirror, so I heard a different set of packets/different order than players at the actual CC, but a few things irritated me:

1) The music distribution seemed incredibly opera-heavy. It felt like 2/3 of the music questions were on opera.
2) Certain categories seemed to be consistently pushed to the end, leading to an imbalance in the tossup/bonus ratio. For instance, at one point in the day I think I'd heard two trash tossups and seven trash bonuses, despite many tossups going dead and us not playing all the bonuses as a result. (Note: I'm not trying to advocate more or less trash at academic tournaments; I'm just using it as an example where the tossup/bonus ratio was out of whack.)
3) No tournament director at a mirror should ever be in the position of not having all of the packets until after the tournament has started. This didn't actually affect me, since I was playing, but this is a course of action that needs to be avoided at all costs.

I basically agree with what Eric said about a little too much fluff and a few clunker lead-ins. I will say that the question set was far too hard for the field at MIT, although it certainly appears to have been much more appropriate for the Stanford location, where the difficulty was needed to differentiate between the great teams that played there.
Susan
Forums Staff: Administrator
Posts: 1829
Joined: Fri Aug 15, 2003 12:43 am

Post by Susan »

Eric wrote: 3) There were some questions that had the "what the hell" factor, like Rembrandt's wife, those mats of cyanobacteria, some film which was the only film of some really obscure artist (I think it was called Mechanical Ballet), Xenu (Weiner's Law #1), characters from Howell's "A Modern Instance", near-impossible Graham Greene novels, and maybe a few others. Also no one fucking cares about figure skating.
See, I really liked the stromatolites tossup (at least, I liked that it came up; I don't remember the tossup itself all that well). It's nice to see questions on (oft-neglected) natural history at all, and it's even better to see ones that aren't on geologic periods (not that there's anything wrong with such questions, but I haven't seen many natural history questions that aren't on geologic periods). Within that answer space, I think stromatolites are on the harder side of reasonableness; you're very likely to come across them in an introductory evolutionary/natural history/biodiversity course. Also, I'm pretty sure they've come up before, though the Stanford Archive isn't supporting my claim.
vandyhawk
Tidus
Posts: 584
Joined: Sat Dec 13, 2003 3:42 am
Location: Seattle

Post by vandyhawk »

DeisEvan wrote: I will say that the question set was far too hard for the field at MIT, although it certainly appears to have been much more appropriate for the Stanford location, where the difficulty was needed to differentiate between the great teams that played there.
I haven't seen any of the questions, but I think your reasoning is a bit off here. It's not so much the difficulty that was needed, but rather well-written tossups and bonuses with good separation between easy, medium, and hard parts. If you took the answers from ACF Nats '05 or something but made the tossups a bunch of fluff until the giveaway, you'd have just as hard a time distinguishing among the best teams as if you used some :chip: questions. Ok, so maybe a little easier, but still not good. Similarly, if something like ACF Fall questions were used at the Stanford site, things would have perhaps been a bit more jumbled, but you wouldn't have seen Illinois B taking out Maryland or Yaphe or anything like that.
User avatar
pray for elves
Auron
Posts: 1050
Joined: Thu Aug 24, 2006 5:58 pm
Location: 20001

Post by pray for elves »

vandyhawk wrote:
DeisEvan wrote: I will say that the question set was far too hard for the field at MIT, although it certainly appears to have been much more appropriate for the Stanford location, where the difficulty was needed to differentiate between the great teams that played there.
I haven't seen any of the questions, but I think your reasoning is a bit off here. It's not so much the difficulty that was needed, but rather well-written tossups and bonuses with good separation between easy, medium, and hard parts. If you took the answers from ACF Nats '05 or something but made the tossups a bunch of fluff until the giveaway, you'd have just as hard a time distinguishing among the best teams as if you used some :chip: questions. Ok, so maybe a little easier, but still not good. Similarly, if something like ACF Fall questions were used at the Stanford site, things would have perhaps been a bit more jumbled, but you wouldn't have seen Illinois B taking out Maryland or Yaphe or anything like that.
I guess what would be more appropriate to say is that the difficulty was more in demand by the field full of top teams at Stanford than by the field at MIT, as opposed to the answer selection making the difference. This also brings up the point that Matt makes about how dead tossups are the enemy of good quizbowl. How many tossups can go dead in a round between two middle teams in a tournament's standings before the answer selection is officially too hard?
User avatar
DumbJaques
Forums Staff: Administrator
Posts: 3084
Joined: Wed Apr 21, 2004 6:21 pm
Location: Columbus, OH

Post by DumbJaques »

Some fluff clues seemed to sneak in. For example, the Mombasa tossup mentioned that Zheng He may have visited it. Considering that guy's been fucking everywhere, its about as useful as saying "Charles Meigs once nailed a hooker in this city". This is especially bad in a tournament where many of the tossups went 9 lines long (not in itself a bad thing, but if you want to write a 9 line tossup, the clues had better be goddamn dense and goddamn interesting). In general, however, the early clues were interesting, difficult, and buzzable with knowledge, which is good.
I have more thoughts on the set for later, but for now I will definitely second this. I'm only highlighting this because it's a clear example of wiki-abuse. A quick search of Mombasa shows this fucking random line of text "Zheng He may have visited Mombasa in 1415" or some shit like that on wikipedia. As far as I know Zheng He did not visit Mombasa (though I think he killed a bunch of animals in Malindi, which is like Mombasa in that they are both M-initialed Kenyan ports filled with black people, something a Ming admiral might not have been able to differentiate, but an editor should probably be able to handle). Even if he had visited Mombasa, it's a useless clue. As it is I think it's worse than a useless clue because it's neg bait for one of the well-known cities that Zheng indisputably visited. And it probably cut into the Chinese history distribution. Bastards!

Also, didn't a bonus say "Charles Meigs once nailed an [underage] hooker in this city?" That's what I knew the answer off of. . .
Chris Ray
OSU
University of Chicago, 2016
University of Maryland, 2014
ACF, PACE
User avatar
Sima Guang Hater
Auron
Posts: 1880
Joined: Mon Feb 05, 2007 1:43 pm
Location: Nashville, TN

Post by Sima Guang Hater »

myamphigory wrote:
Eric wrote: 3) There were some questions that had the "what the hell" factor, like Rembrandt's wife, those mats of cyanobacteria, some film which was the only film of some really obscure artist (I think it was called Mechanical Ballet), Xenu (Weiner's Law #1), characters from Howell's "A Modern Instance", near-impossible Graham Greene novels, and maybe a few others. Also no one fucking cares about figure skating.
See, I really liked the stromatolites tossup (at least, I liked that it came up; I don't remember the tossup itself all that well). It's nice to see questions on (oft-neglected) natural history at all, and it's even better to see ones that aren't on geologic periods (not that there's anything wrong with such questions, but I haven't seen many natural history questions that aren't on geologic periods). Within that answer space, I think stromatolites are on the harder side of reasonableness; you're very likely to come across them in an introductory evolutionary/natural history/biodiversity course. Also, I'm pretty sure they've come up before, though the Stanford Archive isn't supporting my claim.
Hmm, I guess you've got me there. I haven't had the benefit of taking evolutionary biology yet. It was certainly interesting to hear about, but I doubt that it was converted very well (I have you and Jeff Hoppes getting it, and I believe Selene did not. Admittedly a small sample). Perhaps a bonus part would have been better?
DumbJaques wrote:stuff about meigs nailing underage hookers
That was in one of our bonuses, I believe.
Eric Mukherjee, MD PhD
Brown University, 2009
Perelman School of Medicine at the University of Pennsylvania, 2018
Medicine Intern, Yale-Waterbury, 2018-9
Dermatology Resident, Vanderbilt University Medical Center, 2019-

Writer, NAQT, NHBB, IQBT, ACF, PACE
vandyhawk
Tidus
Posts: 584
Joined: Sat Dec 13, 2003 3:42 am
Location: Seattle

Post by vandyhawk »

DeisEvan wrote:I guess what would be more appropriate to say is that the difficulty was more in demand by the field full of top teams at Stanford than by the field at MIT, as opposed to the answer selection making the difference.
Sounds reasonable.
DeisEvan wrote: This also brings up the point that Matt makes about how dead tossups are the enemy of good quizbowl. How many tossups can go dead in a round between two middle teams in a tournament's standings before the answer selection is officially too hard?
Hmm, 6-7? I'm not really sure, but definitely < 10. Of course, you also have to take into account whether one team negged but would've gotten it, whether people had heard of the answer but not yet learned it well enough, etc.
User avatar
Not That Kind of Christian!!
Yuna
Posts: 847
Joined: Mon Feb 26, 2007 10:36 pm
Location: Manhattan

Post by Not That Kind of Christian!! »

I second the opera-heavy assertion... There were also some dubious choices as to what answers should have been prompted or were unpromptable (I don't remember specifically, but there were several instances in which words like "act" or "treaty" were required, despite the fact that these words were implicit in the clue selection). That being said, it was lots of fun. Thanks for the tournament.
Hannah Kirsch
Brandeis University 2010
NYU School of Medicine 2014

"Wow, those Scandinavians completely thorbjorned my hard-earned political capital."
User avatar
cvdwightw
Auron
Posts: 3446
Joined: Tue May 13, 2003 12:46 am
Location: Southern CA
Contact:

Post by cvdwightw »

ToStrikeInfinitely wrote:It was certainly interesting to hear about, but I doubt that it was converted very well (I have you and Jeff Hoppes getting it, and I believe Selene did not. Admittedly a small sample). Perhaps a bonus part would have been better?
From what I recall, Ray Anderson negged somewhere near the end of the question with something like "strombolites", indicating that he had heard of the answer but got confused on their actual name. Illinois A did not pick up the rebound.
vandyhawk wrote:
DeisEvan wrote:I guess what would be more appropriate to say is that the difficulty was more in demand by the field full of top teams at Stanford than by the field at MIT, as opposed to the answer selection making the difference.
Sounds reasonable.
Is probably true. For instance, in our packet, realistically most of the tossups should have been gettable or at least fraudable by a mid-level team at a "standard difficulty" tournament. It was statistically the hardest of the tournament in terms of tossup points per tossup, and yet the game between an undermanned Berkeley D2 team and a Chicago freshman team still saw 14 tossups get answered. However, we did try to ramp our bonuses up slightly more difficult upon disclosure that this was going to feature several of the best teams in the nation.

If you're talking about the MIT packets being too hard, that's probably because they were asked to write playoff packets, and indeed that's where most of the "what-the-heck-is-this-talking-about" questions that I can remember came from.
User avatar
pray for elves
Auron
Posts: 1050
Joined: Thu Aug 24, 2006 5:58 pm
Location: 20001

Post by pray for elves »

cvdwightw wrote:If you're talking about the MIT packets being too hard, that's probably because they were asked to write playoff packets, and indeed that's where most of the "what-the-heck-is-this-talking-about" questions that I can remember came from.
All of the MIT people who wrote (also Dartmouth guys) were commenting on how their questions were made to be much harder, so it has little to do with MIT's writing.
User avatar
Mike Bentley
Sin
Posts: 6134
Joined: Fri Mar 31, 2006 11:03 pm
Location: Bellevue, WA
Contact:

Post by Mike Bentley »

First off, this was a good set that exceeded my expectations. Thanks to the editors for putting hard work into making it end up that way.

I was a bit disappointed with the distribution of questions in the tournament. I would have liked to see a little more trash in this tournament (many packets did not seem to have any), although it's not the biggest deal ever.

Computer Science in this tournament was horribly underrepresented, which was a shame. The only questions we heard were a reasonable enough tossup on Huffman Coding and the completely ridiculous tossup on that type of learning that I can't even remember the answer of. There were no computer science bonuses that came up in the tournament.

From my experience in editing TIT, where we received generally at least 1/0 or 0/1 computer science per packet, it seems unlikely to me that no other computer science questions were submitted. It's even more unlikely as I submitted a tossup on hash functions that was probably better than the tossup on elliptical galaxies that made it into the packet.

Also, the packet where we played Brown (I believe it was one of the MIT packets) was really wacky in terms of difficulty. Some bonuses didn't even come close to having an easy part (the really hard Aztec myth bonus or the difficult Russian history bonus, for example), while others (although they escape me right now) certainly had at least very easy 20s. Bonus difficulty is obviously one of the hardest things to get right, but even in a finals packet it's a good idea to have at least some semblance of easy, medium and hard parts.

Oh, and for all the teams that find it necessary to write indie music bonuses for academic tournaments: please find legitimate easy parts. Something Corporate is not an easy part to a bonus, just as Of Montreal was not an easy part to a different indie music bonus at Penn. It seems like this happens at many tournaments, where indie music bonuses are written such that it will be 30 or 0, and it would be nice if this practice was ended.
Mike Bentley
VP of Editing, Partnership for Academic Competition Excellence
Adviser, Quizbowl Team at University of Washington
University of Maryland, Class of 2008
User avatar
pray for elves
Auron
Posts: 1050
Joined: Thu Aug 24, 2006 5:58 pm
Location: 20001

Post by pray for elves »

Bentley Like Beckham wrote:completely ridiculous tossup on that type of learning that I can't even remember the answer of.
I wholeheartedly agree. I just finished a class on machine learning last semester and I've never heard of that answer (which I've also forgotten).
User avatar
grapesmoker
Sin
Posts: 6368
Joined: Sat Oct 25, 2003 5:23 pm
Location: NYC
Contact:

Post by grapesmoker »

I thought this set was very enjoyable; it wasn't as polished as Terrapin, but on the whole it was still very good. There were a couple of clunkers as people have already mentioned, but they did not distract from the generally high quality. It didn't hurt that many of the teams writing for this tournament were very good, but I think the editors did a good job with what I thought might have initially been weak packets. Brian and his teammates did a great job, and if Cardinal Classic has a similar field next year, I'll definitely do my best to come out.
Jerry Vinokurov
ex-LJHS, ex-Berkeley, ex-Brown, sorta-ex-CMU
code ape, loud voice, general nuissance
User avatar
theMoMA
Forums Staff: Administrator
Posts: 5796
Joined: Mon Oct 23, 2006 2:00 am

Post by theMoMA »

I'll echo what Jerry is saying. Before certain matches, I worried that the packets might be underwhelming because they didn't have known good writers as their source material. However, the editors did a great job bringing these packets up to speed.

I did think that the MIT packets in general could have used some better screening for clunker tossups and bonus evenness, because it seemed that many of the weaker questions and unbalanced bonuses were in those packets.
User avatar
Mike Bentley
Sin
Posts: 6134
Joined: Fri Mar 31, 2006 11:03 pm
Location: Bellevue, WA
Contact:

Post by Mike Bentley »

Oh, also, what was the reason that this tournament started so late? 10:45 is really far too late to start a tournament, especially one going to 13 rounds. If you were not able to reserve the building before 10 AM, there should have at least been a better effort to get everything organized and ready to go promptly at 10, rather than the long down time between teams showing up and the tournament directors getting things moving.
Mike Bentley
VP of Editing, Partnership for Academic Competition Excellence
Adviser, Quizbowl Team at University of Washington
University of Maryland, Class of 2008
User avatar
cvdwightw
Auron
Posts: 3446
Joined: Tue May 13, 2003 12:46 am
Location: Southern CA
Contact:

Post by cvdwightw »

Bentley Like Beckham wrote:Computer Science in this tournament was horribly underrepresented, which was a shame. The only questions we heard were a reasonable enough tossup on Huffman Coding and the completely ridiculous tossup on that type of learning that I can't even remember the answer of. There were no computer science bonuses that came up in the tournament.
Ray Luo was visibly upset after negging about halfway through the question with "bootstrapping", which was apparently a reasonable guess since the question contained something like "FTP name this type of bootstrapping", and after the answer was read he protested that it should have been converted to a tossup on bootstrapping since people would have at least had a chance on that.

I wrote a bonus in which the answers were garbage collection/(generational and incremental, five points each)/lost object problem. Having stolen a copy of the packet, I can attest that this was relegated to bonus 20.
User avatar
cvdwightw
Auron
Posts: 3446
Joined: Tue May 13, 2003 12:46 am
Location: Southern CA
Contact:

Post by cvdwightw »

Bentley Like Beckham wrote:Oh, also, what was the reason that this tournament started so late? 10:45 is really far too late to start a tournament, especially one going to 13 rounds. If you were not able to reserve the building before 10 AM, there should have at least been a better effort to get everything organized and ready to go prompty at 10, rather than the long down time between teams showing up and the tournament directors getting things moving.
I'm pretty sure that due to unexpected traffic delays resulting in a missed flight, Illinois was still in transit at the stated start time, so it would have been started much closer to 10 were it not for that confusion.
User avatar
Eärendil
Wakka
Posts: 138
Joined: Fri Feb 08, 2008 8:56 am
Location: Queenstown, New Zealand

Post by Eärendil »

The Cardinal Classic packet set has been submitted to the Stanford Archive and should be online shortly.
User avatar
Important Bird Area
Forums Staff: Administrator
Posts: 5670
Joined: Thu Aug 28, 2003 3:33 pm
Location: San Francisco Bay Area
Contact:

Post by Important Bird Area »

Bentley Like Beckham wrote: Some bonuses didn't even come close to having an easy part (the really hard Aztec myth bonus... for example.
Which bonus was this? The only one I remember hearing seemed perfectly reasonable.
Jeff Hoppes
President, Northern California Quiz Bowl Alliance
former HSQB Chief Admin (2012-13)
VP for Communication and history subject editor, NAQT
Editor emeritus, ACF

"I wish to make some kind of joke about Jeff's love of birds, but I always fear he'll turn them on me Hitchcock-style." -Fred
Schweizerkas
Lulu
Posts: 83
Joined: Sat May 12, 2007 1:01 am
Location: Stanford, CA

Post by Schweizerkas »

First of all, I'd like to publicly apologize to MIT for getting them the packets so late. Never having edited a tournament before, I severely underestimated the time it takes to compile packets. I'm really sorry for the frustration it must have caused you.

The frantic compiling of the packets also contributed to a few of the clunkers that I hadn't intended to be in the final set.

I'll address a few of the other comments:
1. Some excessively long rambling tossups. Sorry about that. I really did intend for the max length to be 7-7.5 lines. One of our editors has a tendency to write really long tossups, and I must have told him 3 or 4 times to cut down the length, but they still ended up being too long. If I had had an extra day to edit, I would have definitely trimmed some tossups.

2. Criticism of specific questions. Okay, I mainly just edited Physics, Philosophy, Social Science, and visual arts (painting, sculpture, architecture), which haven't seemed to provoke much criticism. I can't comment much on the specific clues and bonus parts in the other categories. As for Eric's 2 criticisms of hard parts in art bonuses:
a. By "Rembrandt's wife", I assume you mean the part on Rubens's wife, Helene Fourment. This doesn't seem ridiculous for a hard part to me. She wasn't just some random wife of his, she was a model for his paintings. I see her name come up twice on the Stanford Archive as a clue for Rubens, and 3 more times in various ACF packets.
b. Mechanical Ballet. Um, yeah, that part was pretty ridiculous. Sorry. The originally submitted question was even harder, with parts (Leger, Tubism, Mechanical Ballet). I dumbed it down to (Braque, Leger, Mechanical Ballet), but it was still probably too hard. I didn't really know what to do with that bonus. Although, I'm not sure it's correct to call Leger "really obscure." He's come up quite a bit before, although he's still fairly hard for a medium part of a bonus.

3.
All of the MIT people who wrote (also Dartmouth guys) were commenting on how their questions were made to be much harder, so it has little to do with MIT's writing.
This comment surprises me. I'd have to do a close comparison to be sure, but I don't believe that we made those packets systematically more difficult. Certainly the AdaBoost tossup was written by them, not by us, and as mentioned before, I made their Leger bonus easier. Just skimming through a few of the MIT packets, I see that I changed a tossup on "Hawking evaporation" to the easier answer of "Hawking radiation," and changed a bonus on (alienation, commodity fetishism, primitive accumulation) to (alienation, Marx, primitive accumulation). I know there were a number of really hard bonuses submitted in the MIT packets. Certainly some of their bonuses I made harder, but I don't believe it was systematic in one direction.

4. Lack of CS. Sorry about that. I think I can find 7 CS questions that were submitted in total. It looks like 3 of them were thrown in after the first 20 questions, and 2 CS bonuses were the 20th bonus exactly. I think some of this was just an accident, although I also did put some of the CS deliberately at the end, because our editing team was weak in the CS area, so I just wasn't as confident of the quality of the questions as other science categories. Sorry, I probably should have tried to get some outside editing help for the CS.

5.
figure skating
Believe it or not, not one, but two teams submitted figure skating bonuses. I'll make sure that in next year's announcement, I mention a $15 penalty for each skating bonus submitted.

6. The 10 am planned start was indeed because that was the earliest we could get rooms. And as Dwight said, most of the time was spent waiting for Illinois, who had missed their Friday flight. I don't recall exactly how much time elapsed between Illinois' arrival and the start of the tournament, but I didn't think we wasted that much time.
User avatar
ezubaric
Rikku
Posts: 371
Joined: Mon Feb 09, 2004 8:02 pm
Location: College Park, MD
Contact:

Post by ezubaric »

cvdwightw wrote:Ray Luo was visibly upset after negging about halfway through the question with "bootstrapping", which was apparently a reasonable guess since the question contained something like "FTP name this type of bootstrapping", and after the answer was read he protested that it should have been converted to a tossup on bootstrapping since people would have at least had a chance on that.
I'm very interested now; could someone post the question? Or at least the real answer? Was it AdaBoost or bagging? (Both would probably be too obscure for this or any tournament.)

Edit: Tag mixup
Jordan Boyd-Graber
UMD (College Park, MD), Faculty Advisor 2018-present
UC Boulder, Founder / Faculty Advisor 2014-2017
UMD (College Park, MD), Faculty Advisor 2010-2014
Princeton, Player 2004-2009
Caltech (Pasadena, CA), Player / President 2000-2004
Ark Math & Science (Hot Springs, AR), Player 1998-2000
Monticello High School, Player 1997-1998

Human-Computer Question Answering:
http://qanta.org/
Schweizerkas
Lulu
Posts: 83
Joined: Sat May 12, 2007 1:01 am
Location: Stanford, CA

Post by Schweizerkas »

MIT Packet 3 wrote:For any set of weak-classifiers, this process is guaranteed to converge to a solution that classifies at least as well as the best classifier. Developed by Yoav Freund and Robert Schapire, this meta-algorithm involves repeatedly reclassifying the training data with the best of the set of original classifiers while reweighting the training data so that the samples classified incorrectly in the previous round are given half the total weight. In the end, each classifier chosen as a top classifier is given a vote based on how well it performed, and the aggregate classifier is applied to the equally-weighted data set. FTP name this machine learning process, thought mistakenly to be named for Lady Lovelace but actually named for its adaptive nature.
ANSWER: AdaBoost [accept Adaptive Boosting]
User avatar
ezubaric
Rikku
Posts: 371
Joined: Mon Feb 09, 2004 8:02 pm
Location: College Park, MD
Contact:

Post by ezubaric »

Schweizerkas wrote:For any set of weak-classifiers, this process is guaranteed to converge to a solution that classifies at least as well as the best classifier. Developed by Yoav Freund and Robert Schapire, this meta-algorithm involves repeatedly reclassifying the training data with the best of the set of original classifiers while reweighting the training data so that the samples classified incorrectly in the previous round are given half the total weight. In the end, each classifier chosen as a top classifier is given a vote based on how well it performed, and the aggregate classifier is applied to the equally-weighted data set. FTP name this machine learning process, thought mistakenly to be named for Lady Lovelace but actually named for its adaptive nature.
ANSWER: AdaBoost [accept Adaptive Boosting]
Wow. This answer is pretty hardcore. It gets used a lot in the real world, and given that SVMs have come up as third parts to bonuses, I think the topic is fair game. However, being a tossup is probably going a little far. That said, "perceptron" has also been a tossup answer, and in the world of AI, that's probably about the same level of importance these days.

I also think that putting Schapire that early in the question is a bad move, as 80% of people who know what boosting is will buzz there. Also, I don't think it's true in general that wrong examples get half the total weight on each iteration (but I haven't done the math, and would love to be corrected). The first sentence also doesn't help much, as that's true of a good number of algorithms (especially if you interpret that to mean regret in the case of online algorithms).

I think it might have been better to start out with some of Warmuth's recent work on showing its relationships to margin based methods, discussing the requirements of the weak learners, or explicitly stating the objective function (which isn't too complicated and is a concrete, unique clue that experts should recognize). Plus, had I been playing on this packet, I probably would have buzzed with boosting after "Robert," which should certainly be prompted.
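Since I brought up the objective function as a concrete, unique clue: here's a minimal Python sketch of it (illustrative code only; `exponential_loss` and `alpha` are my names, and `alpha` is the standard closed-form weak-learner weight that falls out of greedily minimizing the exponential loss):

```python
import math

def exponential_loss(margins):
    """AdaBoost's training objective: sum of exp(-y_i * f(x_i)) over the
    training set, where each margin is y_i * f(x_i) for the current
    combined classifier f."""
    return sum(math.exp(-m) for m in margins)

def alpha(eps):
    """Standard closed-form weight for a weak learner with weighted error
    eps, obtained by minimizing the exponential loss in each round."""
    return 0.5 * math.log((1 - eps) / eps)

# A weak learner better than chance gets positive weight; one at exactly
# chance (eps = 0.5) contributes nothing.
print(alpha(0.25))  # 0.5 * ln(3), about 0.549
print(alpha(0.5))   # 0.0
```

That objective is short enough to state in a sentence, and unlike the Schapire name-drop, an expert buzzing off it has to actually know the algorithm.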

Regardless, I'm happy to see the topic of machine learning advance in quiz bowl!
Jordan Boyd-Graber
UMD (College Park, MD), Faculty Advisor 2018-present
UC Boulder, Founder / Faculty Advisor 2014-2017
UMD (College Park, MD), Faculty Advisor 2010-2014
Princeton, Player 2004-2009
Caltech (Pasadena, CA), Player / President 2000-2004
Ark Math & Science (Hot Springs, AR), Player 1998-2000
Monticello High School, Player 1997-1998

Human-Computer Question Answering:
http://qanta.org/
msaifutaa
Lulu
Posts: 37
Joined: Tue Feb 12, 2008 6:40 pm

Post by msaifutaa »

ezubaric wrote:
Schweizerkas wrote:For any set of weak-classifiers, this process is guaranteed to converge to a solution that classifies at least as well as the best classifier. Developed by Yoav Freund and Robert Schapire, this meta-algorithm involves repeatedly reclassifying the training data with the best of the set of original classifiers while reweighting the training data so that the samples classified incorrectly in the previous round are given half the total weight. In the end, each classifier chosen as a top classifier is given a vote based on how well it performed, and the aggregate classifier is applied to the equally-weighted data set. FTP name this machine learning process, thought mistakenly to be named for Lady Lovelace but actually named for its adaptive nature.
ANSWER: AdaBoost [accept Adaptive Boosting]
Wow. This answer is pretty hardcore. It gets used a lot in the real world, and given that SVMs have come up as third parts to bonuses, I think the topic is fair game. However, being a tossup is probably going a little far. That said, "perceptron" has also been a tossup answer, and in the world of AI, that's probably about the same level of importance these days.

I also think that putting Schapire that early in the question is a bad move, as 80% of people who know what boosting is will buzz there. Also, I don't think it's true in general that wrong examples get half the total weight on each iteration (but I haven't done the math, and would love to be corrected). The first sentence also doesn't help much, as that's true of a good number of algorithms (especially if you interpret that to mean regret in the case of online algorithms).

I think it might have been better to start out with some of Warmuth's recent work on showing its relationships to margin based methods, discussing the requirements of the weak learners, or explicitly stating the objective function (which isn't too complicated and is a concrete, unique clue that experts should recognize). Plus, had I been playing on this packet, I probably would have buzzed with boosting after "Robert," which should certainly be prompted.

Regardless, I'm happy to see the topic of machine learning advance in quiz bowl!
Hi Jordan,

I agree with most of your thoughts above. I assure you that in AdaBoost, the samples you screwed up last time do get half the weight. The math does not initially look like that, but it falls out once you play with it a bit, and treating it that way makes the bookkeeping much easier, especially if you are boosting by hand.
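To make that concrete, here's a quick sketch of one reweighting round (illustrative code, not the question's wording; it assumes the standard Freund-Schapire update with alpha = 0.5 * ln((1 - eps)/eps)). Whatever distribution you start from, the misclassified samples come out holding exactly half the total weight after renormalization:

```python
import math

def adaboost_round(weights, correct):
    """One AdaBoost reweighting step.
    weights: current distribution over samples (sums to 1).
    correct: booleans, True where the weak learner classified correctly.
    Returns the renormalized distribution for the next round."""
    eps = sum(w for w, c in zip(weights, correct) if not c)  # weighted error
    a = 0.5 * math.log((1 - eps) / eps)
    # Shrink the weight of correct samples, grow that of mistakes.
    new = [w * math.exp(-a if c else a) for w, c in zip(weights, correct)]
    z = sum(new)  # normalizer
    return [w / z for w in new]

w = [0.1, 0.2, 0.3, 0.4]
correct = [True, False, True, True]
w2 = adaboost_round(w, correct)
# Total weight now on the misclassified sample: 0.5 (up to float rounding).
print(sum(wi for wi, c in zip(w2, correct) if not c))
```

The half-weight property is independent of the starting distribution, which is why the claim holds in general and not just for the uniform case.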

AdaBoost is covered in MIT's Intro to AI class (usually taken by sophomores and thus taken by both of our team's current sophomores who take CS classes as well as by me several years ago). Furthermore, as you mentioned, it is indeed major in actual practice. I had seen all the answers you mentioned as well--Perceptrons (which we cut out of that same class due to relative irrelevance and lack of time, though we still covered related neural net topics), SVMs, etc. There are many questions in other sciences that definitely go deeper (or at least as deep) into their subfield of Chemistry or Biology, for instance, than a sophomore level intro class.

As for the placement of Schapire, I suppose it's a matter of how you learn it. As a freshman interested in AI, I had heard of it and would have guessed it only from the giveaway. When I took that class as a sophomore, Schapire's name was mentioned once and the procedure was strongly emphasised. I would have buzzed on the procedure, the lines just after Schapire, having forgotten the name. Now that I have taken more classes and TAed the class as well, I would have buzzed on Schapire. If you're seeing Boosting for the first time in the context of doing research in this particular subfield (and thus reading Schapire, frex, at which point the name would be unmistakable), then I could totally see how you would find the name a bigger clue. But then, I think science toss-ups in general are usually powerable by people who research in their subfield (I could be wrong though--this is just what I seem to see from watching my teammates, etc).

Anyways, I'm glad you enjoy the general field of discourse--I agree that it is sorely underrepresented (certainly the reason why simple and prevalent things in the field are still relatively obscure in Quizbowl, or even count as canon expansion, while other sciences delve deeper). On to other comments!


As to the general lack of Computer Science, I did write 1/1 of it for MIT's 4 packets (thus making 2 of the 7 questions I guess). The other was the following bonus that you can find in one of the rounds as like Bonus 21 or 22 or something (it wasn't numbered):
Bonus 21 or 22 or something wrote:He wanted to contact the ghost of his lover using ESP. FTPE
[10] Name this estrogen-pumped luminary of early computer science, who thus considered ESP to be a valid counterargument to his namesake test.
ANSWER: Alan Turing
[10] Turing first created that test in this work of his, in which he outlined and refuted several counterarguments to a thinking machine.
ANSWER: "Computing machinery and intelligence"
[10] Turing's rather weak refutation of the use of ESP in breaking the Turing Test involved the use of this protection.
ANSWER: ESP-proof box [accept obvious equivalents like booth/room/etc, but not something like "The Juggernaut's helmet" from the X-Men]
As for the editing, from what I saw, the editing job was generally well done, particularly considering how down-to-the-wire it was. While some posters mentioned being surprised at the packet quality given the field of packet writers, I was actually surprised at the quality of the editing considering how unintrusive it usually was. Or at least, I'm used to going to an ACF tournament and finding many questions with the same answers as ones we wrote, but with not a single word preserved. Since I tend to like the wording of our questions and hearing our questions, it made me happy that the editors usually found the right things to remove or change to improve a question while keeping much of it the same. In fact, 90% of the changes to my questions wound up being removals of repeats I later found in other teams' packets, something the editors did a good job with as far as I remember.

Of course, sometimes changes to the leadins introduced a few minor hitches (the goats in the Thor tossup were apparently a change? That wasn't ours, but I do remember it, and also the Hamlin one).

As for the difficulty, I was only intimately familiar with the questions I wrote myself, and I can say that none was made easier and a non-negligible number were made harder. That's okay with me, though--it seems like it played well in California, which was the primary field. It's just that over here, it would sometimes lead to a 0ed bonus or a 10ed bonus with a relatively good team who aced the 10 and then had not a clue about the rest, which one of the players in my room dubbed the 'Easy, Impossible, Impossible' bonus. I didn't ask them the missing parts I had written if the answer was changed (in case it was a repeat with something else), so I can't be positive conversions would have been higher, but I think they would have. Then again, it might have led to unduly high conversions over at the real CC, so I totally understand why they did it--I only say this because I am definitely one of the people Evan mentioned as having noted that the editors made some of my questions harder, so I wanted to clarify.

Hmm, let's see--I think someone also mentioned the Aztec bonus. I can only say that I thought the bonus in one of the other packets that started with Tlaloc was harder (I would have 10ed that bonus. On mine, after the editing to make Mictlantecuhtli harder by removing references to the cognate Mictlan, I would have only 20ed it, though 30ed the original). Of course, saying one was harder does not mean the other isn't also hard. I agree that mine was still hard, but I think not above the level of difficulty of the rest of the tournament. Before I sent my packet, I asked a non-myth expert on my team if she thought it was too hard (because I thought it was my hardest bonus), and she indicated that she would have only known Tezcatlipoca for 10, which she thought was about right given the fact that she isn't a myth expert and the tournament difficulty level was supposed to be old school Regionalsish. That had been my hunch on difficulty as well (old Tez seeing play not only as one of the top 4 or 5 most common Aztec deities asked but also in the overubiquitous 'jaguar' tossup pretty near to the bottom, if you let those get that far).

Any other questions and comments are welcome--I have to admit, I mostly registered on these boards just to say that AdaBoost does give half weight to the ones you got wrong, but I wanted to make my post more constructive vis-a-vis the tournament than just a drive-by comment like "Nope, they do get half weight".
User avatar
Captain Sinico
Auron
Posts: 2865
Joined: Sun Sep 21, 2003 1:46 pm
Location: Champaign, Illinois

Post by Captain Sinico »

Hi,
Did anyone actually get the AdaBoost question? It went dead in every game situation that I've heard of so far. My own opinion is that AdaBoost is too obscure and I wish you wouldn't write tossups on it again.

MaS
Mike Sorice
Coach, Centennial High School of Champaign, IL (2014-2020) & Team Illinois (2016-2018)
Alumnus, Illinois ABT (2000-2002; 2003-2009) & Fenwick Scholastic Bowl (1999-2000)
Member, ACF (Emeritus), IHSSBCA, & PACE
User avatar
pray for elves
Auron
Posts: 1050
Joined: Thu Aug 24, 2006 5:58 pm
Location: 20001

Post by pray for elves »

It went dead in every room I heard about at the MIT mirror. Also, AdaBoost was not covered in Brandeis's Intro to AI class, which I have taken, nor in any of the other CS classes I've had here.
msaifutaa
Lulu
Posts: 37
Joined: Tue Feb 12, 2008 6:40 pm

Post by msaifutaa »

ImmaculateDeception wrote:Hi,
Did anyone actually get the AdaBoost question? It went dead in every game situation that I've heard of so far. My own opinion is that AdaBoost is too obscure and I wish you wouldn't write tossups on it again.

MaS
Ironically, due to the discussion here, I think that the topic has gained additional cognizance to the point where many more people would convert it now. I respect your opinion, but I still think that there are answers in toss-ups for the other sciences that are significantly deeper and broader into their respective fields than AdaBoost is. I will readily admit, however, that I misjudged how many people would actually just know it from their non-Quizbowl knowledge. I had assumed our intro AI course typical, so when I first wrote the question, I thought it was going to be answered in far more rooms than it was.

But I still think that Adaboost deserves to be in the Quizbowl canon. I guess it makes some sense that the 'Other Science' topics would in general have less depth than the main three, particularly in areas like Earth Science, where few people major compared to Chemistry and Biology, so we have fewer obvious experts. But with CS, we actually do have a pretty good number of people who major in the field compared to the rest of 'Other Science', so I think we should expect some depth and breadth.

Additionally, Computer Science is a moving field. Adaboost isn't *that* new as CS goes (1996), but compared to Perceptron learning (1957), for instance, it is both newer and more prevalent in the current research climate.

Anyway, I would certainly still consider it fair game for the medium difficulty part of a Machine Learning bonus (perhaps Neural Nets, Adaboost, SVMs). And if someone else wants to write a toss-up about it, I'll certainly support their decision. As for me, obviously I won't personally write a toss-up on the same topic any time soon lest I become too predictable.
User avatar
grapesmoker
Sin
Posts: 6368
Joined: Sat Oct 25, 2003 5:23 pm
Location: NYC
Contact:

Post by grapesmoker »

The number of computer science types in quizbowl is still relatively small, which is why CS hasn't had as much time to develop depth as, for example, physics has. Not only that, but you've also got a bunch of CS people saying that this question is too hard and they've never heard of it, including noted CS advocate in quizbowl Mike Bentley. Your class is probably not typical, as I'm sure you've already surmised (incidentally, what makes you think everyone in CS takes a machine learning course?). The topic you chose to write a tossup on was very, very difficult and obscure not only for non-CS players but also for specialists; I think this would have made a good third part of a machine learning bonus, but it was way too hard for a tossup. I recommend that you visit the "best of the best" forum and read Andrew Hart's excellent post on expanding the canon to get an idea of how to properly introduce such topics into quizbowl.

In conclusion, AdaBoost is the Gunn-Peterson trough of computer science.
Jerry Vinokurov
ex-LJHS, ex-Berkeley, ex-Brown, sorta-ex-CMU
code ape, loud voice, general nuissance
msaifutaa
Lulu
Posts: 37
Joined: Tue Feb 12, 2008 6:40 pm

Post by msaifutaa »

grapesmoker wrote:(incidentally, what makes you think everyone in CS takes a machine learning course?).
It's not a machine learning course--it's an Intro AI course, and a requirement to graduate for those on the CS side of EECS at MIT. If the (admittedly incorrect) assumption that it was typical was correct, then it would have been no stretch to assume that people should know it. I'm actually still pretty surprised that so many people don't know it just from the context of coming home on vacation and talking to my three high school buddies (none of them Quizbowl guys, just CS guys). One went to Stanford, one to UMBC and then UMD, one UMD and then JHU, and all of them knew what it was. Still, anecdotal evidence is, of course, not helpful in generalising the case.

Anyway, that doesn't really matter. Because there are a smaller number of CS tossups, as we both agree, it's possible that legit answers that most people in CS would know don't come up just because of infrequency of CS questions in tournaments. That wasn't the case here, of course, but I bet I can come up with a reasonable number of examples if I had a way to do a search throughout the Stanford and ACF archives without wasting a lot of time. IMO, CS people should be trying to find these, if for no other reason than to encourage legitimate CS knowledge over frauding due to the overuse of the same few answers. Adaboost clearly isn't one of them, but I know they're out there.
User avatar
Mike Bentley
Sin
Posts: 6134
Joined: Fri Mar 31, 2006 11:03 pm
Location: Bellevue, WA
Contact:

Post by Mike Bentley »

msaifutaa wrote:
grapesmoker wrote:(incidentally, what makes you think everyone in CS takes a machine learning course?).
It's not a machine learning course--it's an Intro AI course, and a requirement to graduate for those on the CS side of EECS at MIT. If the (admittedly incorrect) assumption that it was typical was correct, then it would have been no stretch to assume that people should know it. I'm actually still pretty surprised that so many people don't know it just from the context of coming home on vacation and talking to my three high school buddies (none of them Quizbowl guys, just CS guys). One went to Stanford, one to UMBC and then UMD, one UMD and then JHU, and all of them knew what it was. Still, anecdotal evidence is, of course, not helpful in generalising the case.

Anyway, that doesn't really matter. Because there are a smaller number of CS tossups, as we both agree, it's possible that legit answers that most people in CS would know don't come up just because of infrequency of CS questions in tournaments. That wasn't the case here, of course, but I bet I can come up with a reasonable number of examples if I had a way to do a search throughout the Stanford and ACF archives without wasting a lot of time. IMO, CS people should be trying to find these, if for no other reason than to encourage legitimate CS knowledge over frauding due to the overuse of the same few answers. Adaboost clearly isn't one of them, but I know they're out there.
It may be taught at Maryland, but I wouldn't know since AI isn't a required course here.

Anyways, relatively advanced topics from upper level courses are typically not the greatest things to introduce into the canon at non-national events. I'm all for expanding the CS canon, but I personally don't think it's all that limiting as it is considering how much space is given. And tournaments like this are not really the place to do it, especially in tossup form.
Mike Bentley
VP of Editing, Partnership for Academic Competition Excellence
Adviser, Quizbowl Team at University of Washington
University of Maryland, Class of 2008
msaifutaa
Lulu
Posts: 37
Joined: Tue Feb 12, 2008 6:40 pm

Post by msaifutaa »

Bentley Like Beckham wrote: It may be taught at Maryland, but I wouldn't know since AI isn't a required course here.

Anyways, relatively advanced topics from upper level courses are typically not the greatest things to introduce into the canon at non-national events. I'm all for expanding the CS canon, but I personally don't think it's all that limiting as it is considering how much space is given. And tournaments like this are not really the place to do it, especially in tossup form.
Fair enough. Ironically, it is not an upper level class here, so if you asked me at the right time, I would have known Adaboost and been flummoxed on Huffman Coding. Anyway, I've done research at Maryland, and you guys have some pretty great AI there. If you ever have an elective slot, I recommend looking into it.
User avatar
Sima Guang Hater
Auron
Posts: 1880
Joined: Mon Feb 05, 2007 1:43 pm
Location: Nashville, TN

Post by Sima Guang Hater »

holy crap, it's seifter.

Mark, I'm all for introducing things like AdaBoosting into the canon (as it really does sound very interesting), but it should probably be relegated to the third part of a bonus, simply for the reason that it hasn't come up much before.

You mention that things learned in sophomore/junior biology and chemistry classes come up in quizbowl; this is true, but most of the time they come up as leadins to much easier questions, so people can still get the answers at the end. This set contained tossups on mitochondria, necrosis, ATP, and the aorta, all of which people learn about in high school biology. The bonuses, on the other hand, contained things like Thiamine pyrophosphate and Acute Lymphocytic Leukemia, which are much higher-level stuff.
Eric Mukherjee, MD PhD
Brown University, 2009
Perelman School of Medicine at the University of Pennsylvania, 2018
Medicine Intern, Yale-Waterbury, 2018-9
Dermatology Resident, Vanderbilt University Medical Center, 2019-

Writer, NAQT, NHBB, IQBT, ACF, PACE
jollyjew
Lulu
Posts: 13
Joined: Mon Feb 19, 2007 2:51 am
Location: Chicago, formerly Northfield

Post by jollyjew »

I'm curious about the comment about indie rock questions from way the hell higher up in the thread. First, I didn't write either of the questions mentioned, nor any other indie rock question, and am not invested in the existence of indie rock questions to any significant degree, so this isn't defensive or some crap. I'm just not sure what potential part of such a bonus would be easier than Something Corporate or Of Montreal. Bands that get relatively mainstream (relative in the sense that being mainstream in any way is relative) air play seem ripe for easy parts of bonuses.
User avatar
walter12
Lulu
Posts: 81
Joined: Sun Apr 18, 2004 1:49 pm
Location: Iowa City, IA

Post by walter12 »

I believe the Of Montreal bonus from Penn Bowl took a tangent at Athens, Georgia and concluded with "Elephant Six collective" and "Neutral Milk Hotel". I considered Neutral Milk Hotel to be the easy part, although that might not be true for people who haven't been listening to indie music as long as I have.
Paul Drube, University of Iowa
User avatar
Jeremy Gibbs Lemma
Rikku
Posts: 370
Joined: Sat Apr 02, 2005 6:49 pm
Location: Kirksville, Missouri
Contact:

Post by Jeremy Gibbs Lemma »

When I think Athens, GA, I naturally think of Neutral Milk Hotel but that was the third and "hard" part of the bonus. Giving only "On Avery Island" for the album would be difficult for even some who have listened to NMH because everybody associates them with "In the Aeroplane..." It was a legit third part because that required deep knowledge of the topic. Of Montreal is one of the more well known indie rock bands and thus makes a good first part. I would have included last year's album, "Hissing Fauna..." just because it might be more in people's minds but I don't see why it is unaskable.

Saying that Something Corporate is not an "easy" part is just absurd though. As much mainstream coverage as they have received, they are def. reasonable to ask about. Jack's Mannequin is now becoming almost as popular and would be askable as well. Personally, I don't have a problem at all with indie rock coming up in trash distributions because rap seems to come up a good deal more in most instances.
User avatar
DumbJaques
Forums Staff: Administrator
Posts: 3084
Joined: Wed Apr 21, 2004 6:21 pm
Location: Columbus, OH

Post by DumbJaques »

When I think Athens, GA, I naturally think of Neutral Milk Hotel but that was the third and "hard" part of the bonus. Giving only "On Avery Island" for the album would be difficult for even some who have listened to NMH because everybody associates them with "In the Aeroplane..." It was a legit third part because that required deep knowledge of the topic. Of Montreal is one of the more well known indie rock bands and thus makes a good first part. I would have included last year's album, "Hissing Fauna..." just because it might be more in people's minds but I don't see why it is unaskable.

Saying that Something Corporate is not an "easy" part is just absurd though. As much mainstream coverage as they have received, they are def. reasonable to ask about. Jack's Mannequin is now becoming almost as popular and would be askable as well. Personally, I don't have a problem at all with indie rock coming up in trash distributions because rap seems to come up a good deal more in most instances.
Yeah, but I think most people have heard of the rappers that come up. You seem to be under the impression that indie music is just as widespread and has as many accessible parts as rap or rock or another huge genre. That's wrong, and it's sort of really implied by the whole idea of indie music. If I write a rap bonus, my easy part isn't going to be Jin, who's really popular in the rap subculture (and is "notable" for being the first Asian-American rapper signed by a major label). I could claim that Jin has received "mainstream" coverage (he's been on MTV rap-battling numerous times, is popular on the internet, was in the news for political stuff a little while ago), but it's sort of an immaterial claim when it's clear to me that nowhere near 50% of the field would convert a bonus part on him, making it a terrible answer selection. Sorry, but I think if you write Indie music, as much as you love it, your easy part needs to be something that's more mainstream, because that's what easy parts are supposed to be. It's one of those things that really pisses people off when one team gets handed a huge zero in a close game.
Chris Ray
OSU
University of Chicago, 2016
University of Maryland, 2014
ACF, PACE
User avatar
Captain Sinico
Auron
Posts: 2865
Joined: Sun Sep 21, 2003 1:46 pm
Location: Champaign, Illinois

Post by Captain Sinico »

msaifutaa wrote:Ironically, due to the discussion here, I think that the topic has gained additional cognizance to the point where many more people would convert it now.
No. Just no. Using that to justify asking about things is rampantly unfair to all the people who don't read this board, etc.
msaifutaa wrote:I respect your opinion, but I still think that there are answers in toss-ups for the other sciences that are significantly deeper and broader into their respective fields than AdaBoost is.
I think that's a very poor askability metric. A good one is this: that question (apparently) went dead in 100% of actual game situations in which it was used. That's prima facie proof that it's too hard, regardless of what you think.
As a more general comment, I think it's a bad idea to try to use your estimation of importance in a field to justify a question and this is doubly true when you're evaluating your own field of study. You have to consider, first of all, whether the people playing your questions actually know what you're going to ask about. Only then should you consider the importance of a topic for a question.
Kent B wrote:...Of Montreal is one of the more well known indie rock bands and thus makes a good first part. ...Saying that Something Corporate is not an "easy" part is just absurd though....
See above. How many people converted those parts? How relatively hard are those parts? I'd say much, much harder than is normal or fair.
Last edited by Captain Sinico on Wed Feb 13, 2008 3:07 pm, edited 1 time in total.
Mike Sorice
Coach, Centennial High School of Champaign, IL (2014-2020) & Team Illinois (2016-2018)
Alumnus, Illinois ABT (2000-2002; 2003-2009) & Fenwick Scholastic Bowl (1999-2000)
Member, ACF (Emeritus), IHSSBCA, & PACE
User avatar
Mike Bentley
Sin
Posts: 6134
Joined: Fri Mar 31, 2006 11:03 pm
Location: Bellevue, WA
Contact:

Post by Mike Bentley »

A quick trip over to Wikipedia tells me that Something Corporate's most popular album peaked at #24 on the charts. That's not exactly widespread popularity. Granted, this shouldn't be the only metric of a question's ease and askability (as Matt argued on the other forum, it's probably a good idea for trash to ask about things that the audience is interested in). However, the two indie music bonuses I mentioned simply did not have parts that were accessible to a large portion of the field (whereas a small portion of the field seems to have very in-depth knowledge of the subject, which presents a problem). Of Montreal and Something Corporate may be getting a [subjectively] lot of coverage, but in my opinion they don't have the wide appeal and notability among enough people playing at the tournament to make them easy parts.
Mike Bentley
VP of Editing, Partnership for Academic Competition Excellence
Adviser, Quizbowl Team at University of Washington
University of Maryland, Class of 2008
User avatar
walter12
Lulu
Posts: 81
Joined: Sun Apr 18, 2004 1:49 pm
Location: Iowa City, IA

Post by walter12 »

Yeah, but I think most people have heard of the rappers that come up. You seem to be under the impression that indie music is just as widespread and has as many accessible parts as rap or rock or another huge genre. That's wrong, and it's sort of really implied by the whole idea of indie music.
I agree. I don't have the numbers in front of me, but I know that the Indie/Emo/Alt Music packet easily turned out to be the most difficult packet at last year's TTGT11 (at the Iowa site where we didn't use the geography subject packet :wink: ). This despite last-minute attempts to tone down the difficulty as well as its inclusion of some undeniably "mainstream" alternative artists such as Radiohead and Smashing Pumpkins.

As for the questions mentioned, I don't think anybody would have a huge problem if Of Montreal or Something Corporate were the middle-part of a bonus whose easy part was something more mainstream. For example, replacing either of the last two parts of the Of Montreal questions with a part on R.E.M.- the band that I'm apt to think of when someone mentions Athens, Georgia.
Paul Drube, University of Iowa
User avatar
theMoMA
Forums Staff: Administrator
Posts: 5796
Joined: Mon Oct 23, 2006 2:00 am

Post by theMoMA »

Here's the thing about that Something Corporate clue...it didn't even give the most famous things about the band! I've listened to a few of their songs and think they're an okay band, so I'd like to think that should be able to get me points on a Something Corporate bonus part. Something like "Drunk Girl" or "I Woke up in a Car," or even giving Andrew McMahon, would have been helpful to casual fans.

Bonuses on more niche things like indie music should be much more lenient with how they bash you over the head with the answer than canonical things. Like, from Penn Bowl...if you're going to write on shoe designers, make sure there's a part that people only cursorily aware of shoe designers can get.
yoda4554
Rikku
Posts: 254
Joined: Thu Aug 11, 2005 8:17 pm

Post by yoda4554 »

theMoMA wrote: Bonuses on more niche things like indie music should be much more lenient with how they bash you over the head with the answer than canonical things. Like, from Penn Bowl...if you're going to write on shoe designers, make sure there's a part that people only cursorily aware of shoe designers can get.
Certainly true in principle, but I know shit about shoes and I could've gotten Manhola Blahnik (or however that's spelled).
Locked