Toolserver:Admin:Shopping list
This page was moved from the Toolserver wiki.
Toolserver has been replaced by Toolforge. As such, the instructions here may no longer work, but may still be of historical interest.
Please help by updating examples, links, template links, etc. If a page is still relevant, move it to a normal title and leave a redirect.
Short-term expansion plans
Things we would like to buy to support expansion for the next year or two, in chronological order. See below for rationale.
- (+) New FC switch (EUR 3,000), for redundancy/multipathing
- (+) 3 8 Gb/s FC HBAs, for adenia, thyme, and rosemary; these will be the next database servers to run out of space (~EUR 800 each)
- (+) FC HBA for ptolemy; it has enough disk space for now, but it could do with more spindles.
- (-) A copy of the existing array (EUR 15,000), for redundancy and improved performance (2x for reads)
- (+) 2 extra SAS trays (24x 300 GB 15k) for each array. This will give 5.4 TB of total storage (see the capacity sketch after this list), which is enough to add disk space to all servers currently needing it, and to move enwiki and probably another cluster entirely to the array. (EUR 10,000)
- (+) Extra 8-port license for the switch (unknown cost, but not much)
- Veritas SFHA license for the cluster (~EUR 4,500)
- 2x FC HBAs for the cluster (~EUR 3,000)
- (-) 2 extra database servers, each with 2 local disks, 192 GB (or more) of RAM, and an FC HBA, to run as virtualised database servers with storage on the array (~EUR 10,000 each?)
- This will free at least two existing X4250s to be login servers or something else.
- Example: HP DL165, 24x 2.1 GHz cores, 192 GB RAM, US$14,874 list price
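As a sanity check on the 5.4 TB figure above, here is the arithmetic as a small Python sketch. It assumes 12-disk trays (so "24x 300 GB" means 24 new disks in total) and a mirrored, RAID-10-style layout that halves raw capacity; neither is a confirmed spec, but the numbers only come out to 5.4 TB under these assumptions.

```python
# Back-of-the-envelope check of the 5.4 TB figure.
# Assumptions (not confirmed specs): 12-disk trays, so the 2 new
# trays hold 24 disks in total, and a mirrored (RAID-10-style)
# layout that halves raw capacity.

DISK_GB = 300
EXISTING_DISKS = 12          # the array's current SAS disks
NEW_TRAYS = 2
DISKS_PER_TRAY = 12          # assumption

raw_gb = (EXISTING_DISKS + NEW_TRAYS * DISKS_PER_TRAY) * DISK_GB
usable_gb = raw_gb / 2       # mirroring halves capacity

print(f"raw: {raw_gb / 1000:.1f} TB, usable: {usable_gb / 1000:.1f} TB")
# -> raw: 10.8 TB, usable: 5.4 TB
```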
Key: (+) = budgeted; (-) = to be funded by chapters.

Some other things we might want to buy, roughly in order of importance:
- Two more stackable switches, so we can have two redundant switches per rack
- A spare or two for each type of disk we might need
- MySQL support: EUR 430/server/year for basic (2 incidents), or EUR 1,500/server/year for silver
- A new database that could store the complete text in uncompressed form, for easy access by users (~5-10 TB?)
  - Could use zedler's old SATA array for this.
  - Let's not do that; the old array sucks. We could put it on the SATA SAN storage instead.
- A new database that could store the DBpedia data. We need to find out how to get that into MySQL (see the import sketch after this list), and how much space it would take. This should actually be paired with the respective wiki databases, on the same servers.
- A new database that could store all the access data collected from the squids. This should likewise be paired with the respective wiki databases, on the same servers.
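On the DBpedia item above: DBpedia publishes its data as N-Triples dumps, so one plausible route into MySQL is a small loader that parses a dump and bulk-inserts into a triples table. This is only a sketch; the dump file name, table schema, and connection details are illustrative assumptions, not a settled design.

```python
import re
import mysql.connector  # MySQL Connector/Python

# Hypothetical dump file and schema; adjust to the real DBpedia dataset.
DUMP_FILE = "infobox_properties_en.nt"
TRIPLE_RE = re.compile(r'<([^>]+)>\s+<([^>]+)>\s+(.+?)\s*\.\s*$')

def parse_triples(path):
    """Yield (subject, predicate, object) from an N-Triples file.
    The object is kept raw (URI brackets or literal quotes included)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            m = TRIPLE_RE.match(line)
            if m:
                yield m.groups()

conn = mysql.connector.connect(host="localhost", user="loader",
                               password="...", database="dbpedia")
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS triples (
                 subject   VARCHAR(512) NOT NULL,
                 predicate VARCHAR(512) NOT NULL,
                 object    TEXT NOT NULL,
                 KEY (subject(255))
               )""")

batch = []
for triple in parse_triples(DUMP_FILE):
    batch.append(triple)
    if len(batch) >= 1000:   # insert in batches for speed
        cur.executemany("INSERT INTO triples VALUES (%s, %s, %s)", batch)
        conn.commit()
        batch.clear()
if batch:
    cur.executemany("INSERT INTO triples VALUES (%s, %s, %s)", batch)
    conn.commit()
conn.close()
```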
Rationale
- Another FC switch:
- Two completely separate FC networks with all storage/hosts connected to both.
- Provides redundancy in case one switch fails, loses power, etc.
- Hosts access storage over both paths (multipathing), the standard configuration for a SAN (see the sketch below)
- Improved performance (up to 16 Gb/s over both paths), though this is unlikely to matter for us
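To illustrate what multipathing buys us, here is a conceptual Python sketch; in practice the OS multipath driver does this transparently, and nothing here reflects our actual configuration. Reads are spread round-robin over both paths, and when one path dies, I/O carries on over the other.

```python
import itertools

class Path:
    """One FC path (HBA port -> switch -> array controller)."""
    def __init__(self, name):
        self.name = name
        self.alive = True

    def read(self, block):
        if not self.alive:
            raise IOError(f"path {self.name} is down")
        return f"block {block} via {self.name}"

class MultipathDevice:
    """Round-robin over the paths; fail over when one dies."""
    def __init__(self, paths):
        self.paths = paths
        self._rr = itertools.cycle(range(len(paths)))

    def read(self, block):
        for _ in range(len(self.paths)):
            path = self.paths[next(self._rr)]
            try:
                return path.read(block)
            except IOError:
                continue            # try the next path
        raise IOError("all paths down")

dev = MultipathDevice([Path("fc-switch-A"), Path("fc-switch-B")])
print(dev.read(1))                  # served via switch A
print(dev.read(2))                  # served via switch B (load is spread)
dev.paths[0].alive = False          # simulate switch A failing
print(dev.read(3))                  # still served, via switch B
```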
- Another array identical to the HP one, to mirror the data over.
- A redundant storage setup will become more important once most of our storage depends on the array. An issue with one array could bring down the entire platform.
- Increased (2x) read performance; reads are the main bottleneck for databases.
- No increase in storage (the arrays are mirrored), but with 300 GB SAS disks, storage probably isn't an issue
- Requires that all future storage be added to both arrays to maintain redundancy.
- More storage for the array.
- The plan is to grow in future by adding storage to the array, instead of buying servers with local disks.
- High initial cost, but in the long run it is much more cost-efficient than using local disks in each server, which leaves unused space stranded.
- Spreading all data over all disks makes better use of the available I/O capacity, instead of one server being 100% loaded and another 10% (see the sketch below)
- This is basically a permanent wishlist item; we can scale the storage forever.
- After 7 enclosures, we will need to buy another array head. We might do this before then for performance reasons.
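A quick illustration of the load-spreading point, with made-up numbers; the IOPS figures and spindle counts below are purely illustrative, not measurements from our servers.

```python
# Illustrative numbers only: per-server I/O demand in IOPS, with an
# assumed ~150 IOPS per 15k spindle and 4 local spindles per server.
demand = {"sql-s1": 580, "sql-s2": 60, "sql-s3": 120}
SPINDLE_IOPS = 150
SPINDLES_PER_SERVER = 4

# Local disks: each server is limited to its own spindles.
for server, iops in demand.items():
    cap = SPINDLE_IOPS * SPINDLES_PER_SERVER
    print(f"{server}: {iops}/{cap} IOPS ({iops / cap:.0%} loaded)")
# sql-s1 runs at ~97% of its local ceiling while sql-s2 idles at 10%.

# Shared array: all spindles serve the combined load.
total_spindles = SPINDLES_PER_SERVER * len(demand)
pool_cap = SPINDLE_IOPS * total_spindles
total = sum(demand.values())
print(f"pooled: {total}/{pool_cap} IOPS ({total / pool_cap:.0%} loaded)")
# -> pooled: 760/1800 IOPS (42% loaded), leaving headroom for any one
#    server to burst well past its old local-disk ceiling.
```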
- New database servers (start with 2) without local storage.
- These will be connected to the array.
- This needs more storage on the array first (it only has 12 SAS disks at the moment, which is also not fast enough)
- Put lots of memory (192 GB+) in each server and run several MySQL instances on each one, in zones (see the sketch below). This makes better use of resources (CPU) and is cheaper.
- Existing database servers will be repurposed as login servers (or for special projects) with local, user-accessible storage
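On running several MySQL instances per server: each instance needs its own port, socket, and data directory (and with zones, each could be confined to its own zone). Below is a minimal Python sketch that generates per-instance my.cnf files; the instance names, paths, ports, and buffer-pool split are all illustrative assumptions, not an agreed layout.

```python
import os

INSTANCES = ["sql-s1", "sql-s2", "sql-s3"]   # hypothetical cluster names
BASE_PORT = 3306
TOTAL_BUFFER_POOL_GB = 160   # leave headroom out of 192 GB for the OS etc.

for i, name in enumerate(INSTANCES):
    datadir = f"/san/mysql/{name}"           # storage lives on the array
    os.makedirs(datadir, exist_ok=True)      # needs appropriate privileges
    cnf = f"""[mysqld]
port            = {BASE_PORT + i}
socket          = /tmp/mysql-{name}.sock
datadir         = {datadir}
innodb_buffer_pool_size = {TOTAL_BUFFER_POOL_GB // len(INSTANCES)}G
"""
    with open(f"/etc/mysql/{name}.cnf", "w") as f:
        f.write(cnf)

# Each instance is then started with its own config, e.g.:
#   mysqld --defaults-file=/etc/mysql/sql-s1.cnf
```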