Well, a couple of questions: are the machines on identical or very similar hardware configurations? If so, you can change /etc/fstab to reference /dev/sdX instead of UUIDs, then use dd to create a raw image of the disk and use that for all the machines (for the most part; you still have to change hostnames and such).
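To make the imaging step concrete, here is a minimal sketch using file-backed "disks" so it can run safely anywhere; on real hardware SRC and DST would be block devices like /dev/sda, run from a live environment with the disks unmounted (all paths and sizes here are illustrative):

```shell
# Placeholder "disks" for the demo; on real hardware: /dev/sda etc.
SRC=/tmp/source_disk.img
DST=/tmp/target_disk.img

# Fake a 4 MiB source disk so the demo is self-contained:
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null

# 1) Take a raw image of the source disk:
dd if="$SRC" of=/tmp/base.img bs=4M 2>/dev/null

# 2) Write that image to each target disk:
dd if=/tmp/base.img of="$DST" bs=4M 2>/dev/null

# Verify the clone is byte-identical to the source:
cmp "$SRC" "$DST" && echo "clone OK"
```

On real disks you would add `sudo`, and a larger `bs=` with `status=progress` makes the copy faster and more visible.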
If not, you can create unattended installation scripts, add them to the install disc, and have it set everything up for you.
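To give a taste of what "unattended" looks like, here is a fragment of a debian-installer preseed file; the specific values (locale, username, package list) are just illustrative:

```
# preseed.cfg - answers the installer's questions ahead of time
d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/layoutcode string us
d-i partman-auto/method string regular
d-i partman/confirm boolean true
d-i passwd/user-fullname string Deploy User
d-i passwd/username string deploy
d-i pkgsel/include string openssh-server vim
d-i finish-install/reboot_in_progress note
```

You point the installer at a file like this with a boot parameter, and it answers every prompt for you.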
It sounds to me like the unattended install option is more likely appropriate, but that suits multiple manual installations; for mass deployment over the network you could make the installer image bootable via PXE.
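For the PXE route, a minimal DHCP/TFTP server with dnsmasq might look like this; the interface name, address range, and paths are assumptions you would adapt to your network:

```
# /etc/dnsmasq.d/pxe.conf
interface=eth0
dhcp-range=192.168.0.100,192.168.0.200,12h
# Tell PXE clients which bootloader to fetch:
dhcp-boot=pxelinux.0
# Serve the boot files over dnsmasq's built-in TFTP:
enable-tftp
tftp-root=/srv/tftp
```

Drop the netboot installer files (kernel, initrd, pxelinux.0) under the TFTP root and the machines can boot straight into the installer over the network.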
Another great option, and actually a project tailored for this specific purpose, is Fully Automatic Installation (FAI), which is available in the repos. You can find more info here:
A lot of people regard FAI as the definitive solution for massive hands-free deployment, with practically unlimited scaling. The Server Fault folks like it, too.
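FAI drives installs through "classes": each machine is assigned classes, and each class pulls in its own config. For example, a package list for FAI's default FAIBASE class lives in the config space like this (the path shown is FAI's default layout; the package names are just examples):

```
# /srv/fai/config/package_config/FAIBASE
PACKAGES install
openssh-server
ntp
vim
```

Adding another class with its own package_config file is how you vary what different groups of machines get, without touching the base install.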
Personally, my decision would depend on the project at hand. For my Raspberry Pi computing cluster I am using a full image of the primary system, modified for each secondary system in the cluster, with the hostname and similar settings edited by hand. If I were installing a pre-determined set of packages on widely varying hardware at random intervals (i.e. producing Ubuntu-based machines built to order), I would use an unattended install disc. If I were deploying Ubuntu to a larger cluster of servers or for a datacenter, I would use FAI, especially since you can integrate other admin tools with FAI installs to very rapidly automate the creation of remotely administered systems.
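The per-node hostname editing I mention for the Pi cluster is easy to script. A minimal sketch, assuming the clone's root filesystem is mounted at a prefix you pass in (the function name and example paths are my own, not a standard tool):

```shell
# set_node_hostname <mounted-root> <new-hostname>
# Rewrites /etc/hostname and the 127.0.1.1 line in /etc/hosts under
# the given root, so each cloned node boots with its own name.
set_node_hostname() {
  root="$1"
  new="$2"
  echo "$new" > "$root/etc/hostname"
  # Replace whatever name was on the 127.0.1.1 line with the new one:
  sed -i "s/^127\.0\.1\.1.*/127.0.1.1\t$new/" "$root/etc/hosts"
}

# Example: mount a clone's SD card and rename it
# set_node_hostname /mnt/node3 pi-node3
```

Run it once per cloned card before first boot and you skip the manual editing entirely.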
From the basic sound of it though - you want FAI.