Red & White Bowl results
We're not done yet, but enjoy the first four rounds of stats:
http://www4.ncsu.edu/~paking/ncata/2006 ... whitebowl/
Patrick King
EDIT: Changed link for changed directory structure.
- First Chairman
- Auron
- Posts: 3651
- Joined: Sat Apr 19, 2003 8:21 pm
- Location: Fairfax VA
- Contact:
-
- Wakka
- Posts: 151
- Joined: Sun Feb 27, 2005 12:29 pm
- Location: Saint Mary's School, Raleigh, NC
- radiantradon
- Lulu
- Posts: 41
- Joined: Wed Dec 22, 2004 1:02 am
- Location: Burlington, NC
Walter Williams did OK but our team had fun. I can't eat a whole lot during a quiz bowl tournament or it makes me sick, so I enjoyed the bagels after we lost to Cary Academy A ;)
I agree that the tournament was very well-run. I think it's the first tournament I've been to that was AHEAD of schedule!
Wow, very impressive guessing of the URL, Eric. I rearranged the directory structure for additional stats and then added the index file before a few of my friends decided I had to accompany them to Cary Town Center...while I was very close to falling on my nose...on just about one hour of sleep the night before. Thank goodness I wasn't the tournament director!
Now for the results....
As Eric mentioned, the stats are now located at http://www4.ncsu.edu/~paking/ncata/2006 ... whitebowl/
Results:
Playoffs
1. Raleigh Charter A (6-0, 600.0ppg) 400, 8. Robinson (4-2, 198.3ppg) 120
4. Cary Academy A (5-1, 359.2ppg) 330, 5. Walter Williams (5-1, 344.2ppg) 205
3. Cary Academy B (6-0, 346.7ppg) 265, 6. St. Mary's (4-2, defeated Richmond Senior 250-200, 240.0ppg) 190
2. Raleigh Charter B (6-0, 359.2ppg) 215, 7. Richmond Senior (4-2, 210.8ppg)
Semifinals
Raleigh Charter A 570, Cary Academy A 180
Cary Academy B 310, Raleigh Charter B 185
Finals
Raleigh Charter A 525, Cary Academy B 65
Top individuals (prelims)
1. Will Schultz (Raleigh Charter A) 117.50ppg
2. Nick Tarleton (Cary Academy B) 90.00ppg
3. Bryan Brooks (Arendell Parrott) 73.33ppg
4. Nancy Vanderveer (St. Mary's) 72.50ppg
5. Mark Hallen (Cary Academy A) 66.00ppg
Neg champion--Nancy Vanderveer (St. Mary's)--9. She added 2 more in the playoffs.
Thanks to all the teams who came out today and our wonderful staff. We were very excited to see so many teams there.
I'll discuss the issue about the division strength in a forthcoming post.
Patrick King
One reminder before I completely fall on my nose: As per NAQT policy, please do NOT discuss the contents of the question sets (IS-57A) publicly until NAQT clears the question sets for discussion. This set will be used through the end of May, so please respect the wishes of NAQT. (Obviously you can discuss them internally within your team!)
Patrick King
Patrick King
Our kids and I had a wonderful time today - not just because we turned in our best performance of the year thus far, but because the tournament was SO efficient and well-run.
I liked the question set, though some of the questions were pretty much softballs - but I guess that's how the A-sets are supposed to be. The moderators were good, everyone knew the rules, the schedule was adhered to, and as someone said IT FINISHED AHEAD OF SCHEDULE! We have a 2.5-hour drive home and we were back at school by 6:00.
Thanks again to Paul, Patrick, and everyone else who made this happen.
This was a well-run tournament with good readers all around who knew the rules. The facility was nice and quiet, and I think the question set was an appropriate level for the field of teams. The scores by and large, I think, reflected a nice balance between letting teams show some knowledge and offering a challenge.
And I've never seen such large cream cheese tubs before. God bless America.
Great tournament. We look forward to coming back next year!
Eric
SQBS is coldly logical and knows nothing about A and B teams, team strengths, and so on. Should "random" mean "We didn't deliberately screw/reward any team in the pairings" or "We don't care who plays whom from round to round"? I personally like the former. When I set up the pairings for the RTO, I used the SQBS "random pairing" function and then looked over the results to eliminate certain types of pairings (not specific pairings). I did not review schedules for any kind of equivalence, but I would really hate to see an undeserving team skate into the finals while good, hard-working, red-blooded American teams get chainsawed in a Bracket of Death. That's the bottom line.
After looking at the divisions more closely, it seems that they are better balanced, at least on paper, than they seemed at first. Each division did contain what I would consider 3-4 good teams (based on what I've seen). Some turned out to be more competitive than others, but in retrospect I think they were OK. I would probably have swapped someone from the top 4 in Tompkins with someone from the bottom of Winston. Tompkins was tough. Terry Sanford is a better team than 3-3, and each of their 3 losses was by about 2 questions (with decent bonus conversion--they averaged 19).
Eric
I guess I should really explain how things were done for the preliminary rounds. I've been a bit on the busy side the last week or so, and it's going to get more so this coming week, so this is the best chance I have to actually get this written....
For the preliminary rounds, I took each of the 21 teams and assigned them a number using the command :randInt(1,24) on my TI-86 (essentially producing a random integer in the interval [1,24]). Thus teams with numbers 1-7 were in Caldwell, 9-15 were in Tompkins, and 17-23 were in Winston. The only proviso on the process was that we decided that two teams from the same school would not be paired in the same division (if the number generated placed two teams in such a situation, I would simply generate another number). The team numbers themselves are mostly transparent to the end user; they were only used to determine a team's division, when their bye was, and when they would play each other team.
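For the curious, the draw procedure above can be sketched in code. This is only an illustration of the process as I understand it from the description, not the actual procedure used at the tournament: the division names and slot ranges come from the post, but the handling of slots 8, 16, and 24 (here simply re-rolled, since only 21 of the 24 numbers map to teams) and all team/school names are assumptions.

```python
import random

# Slot-to-division mapping from the post: 1-7 Caldwell, 9-15 Tompkins,
# 17-23 Winston. Slots 8, 16, and 24 map to no team.
DIVISIONS = {
    "Caldwell": range(1, 8),
    "Tompkins": range(9, 16),
    "Winston": range(17, 24),
}

def division_of(slot):
    """Return the division a slot number falls in, or None for 8/16/24."""
    for name, slots in DIVISIONS.items():
        if slot in slots:
            return name
    return None

def draw_divisions(teams, rng=None):
    """teams: list of (team_name, school) pairs, at most 21 entries.
    Mirrors the TI-86 randInt(1,24) procedure: roll a slot for each team,
    re-rolling when the slot is already taken, is an unmapped slot, or
    would put two teams from the same school in one division."""
    rng = rng or random.Random()
    assignment = {}  # slot -> (team_name, school)
    for team, school in teams:
        while True:
            slot = rng.randint(1, 24)  # equivalent of randInt(1,24)
            div = division_of(slot)
            if slot in assignment or div is None:
                continue  # slot taken, or one of 8/16/24: roll again
            schools_in_div = {s for sl, (_, s) in assignment.items()
                              if division_of(sl) == div}
            if school in schools_in_div:
                continue  # same-school conflict: roll again
            assignment[slot] = (team, school)
            break
    return assignment
```

With 21 teams, all 21 mapped slots fill, so each division ends up with exactly seven teams; the slot number itself can then drive the round-robin schedule and bye placement just as described above.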
Now I presume the next question will be, "why did (we) choose a random system?" The major reason is that the tournament director and I really did not believe it was appropriate to hand select the divisions. We believed that in selecting the divisions in a way to make them as "balanced" as possible, we are in effect creating divisions that produce outcomes that we expect to happen--in essence, the teams that we believed to be stronger would emerge from their divisions. Now this is all well and good, assuming that the judgment we could make as to relative team strengths was accurate. But we did not believe that we could make such a judgment in an accurate manner. Even though we had a number of competition results, the results don't say how good the teams are in general. The results after all reflect which teams show up on a given day, the format, and even the composition of the individual team, which often changes on a weekend-to-weekend basis. In the end, we are left to make a subjective judgment about how good the teams are, and we did not feel comfortable making that kind of judgment.
When I did the randomizations and looked at the divisions that were produced, I had a feeling that the divisions were not evenly "balanced". I have heard from different people observations about the source of the imbalances that differed from my own, which suggests that my initial feelings about how the divisions were imbalanced may have been wrong (and makes me think I likely would not have produced a much "better" division breakdown by handpicking the divisions or by doing some form of classful randomization where the "better" teams are placed in roughly equal numbers across the divisions). But to remedy the potential imbalances, there were really only three good options that I could come up with:
- Handpick the divisions
- Rerandomize the divisions until I came up with something that appeared more or less balanced
- Let the current division makeup stand
So there's the methodology and the justification. I'm definitely interested in finding another way to do business, but it has to be a way that keeps my preconceptions of how good everyone is out of the pairings.
Patrick King