Originally Posted by AddictedGamer93
The world is limited to the size of your map.
Yup, most likely because the Xbox 360 is limited to 512 MB of system RAM, and because Xbox 360s aren't guaranteed to have a hard drive or other storage. If Microsoft had required every Xbox 360 to ship with a hard drive, the game could have had a map as large as the hard drive allowed.
Assuming each block in the world uses 8 bits (1 byte) of memory, a 1 km x 1 km world with a 128-block height (I'm assuming the Xbox version uses a 128 world height instead of the 256 allowed by newer PC versions) would be 128,000,000 blocks, and therefore 128,000,000 bytes of memory, or 125,000 kilobytes, or about 122 megabytes. I'm assuming they're using some kind of compression scheme, though: by grouping runs of identical blocks (like air), they don't have to store a byte for every empty block. Anyway, the long story short is that the maps take up a good amount of space, so I'm not surprised the map size is limited.
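To make the numbers concrete, here's a small sketch in Java of the estimate above plus the kind of run-length compression being described. All names here are hypothetical for illustration; this is not the actual format the game uses.

```java
// Hypothetical illustration of the size estimate and of run-length
// encoding a column of blocks. NOT the actual Minecraft map format.
public class MapSizeEstimate {
    // One byte per block: width * depth * height.
    public static long rawBytes(int widthM, int depthM, int heightM) {
        return (long) widthM * depthM * heightM;
    }

    // Run-length encode a column as (blockId, runLength) pairs.
    public static int[] rle(byte[] column) {
        java.util.List<Integer> out = new java.util.ArrayList<>();
        int i = 0;
        while (i < column.length) {
            int j = i;
            while (j < column.length && column[j] == column[i]) j++;
            out.add((int) column[i]);  // block id
            out.add(j - i);            // run length
            i = j;
        }
        int[] result = new int[out.size()];
        for (int k = 0; k < result.length; k++) result[k] = out.get(k);
        return result;
    }

    public static void main(String[] args) {
        long bytes = rawBytes(1000, 1000, 128);
        System.out.println(bytes);               // 128000000 blocks/bytes
        System.out.println(bytes / 1024);        // 125000 KB
        System.out.println(bytes / 1024 / 1024); // 122 MB

        // A mostly-air column of 128 blocks collapses to two runs.
        byte[] column = new byte[128];
        for (int y = 0; y < 4; y++) column[y] = 1; // stone at the bottom
        System.out.println(rle(column).length);    // 4 ints instead of 128 bytes
    }
}
```

The point of the sketch: a column that is mostly air compresses from 128 bytes down to a couple of (id, length) pairs, which is why the raw 122 MB figure is a worst case rather than what's actually stored.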
Thanks, but that didn't help much; the only info about it on that page was this: "C# (XNA Framework)/C++(?)"
The Xbox 360 CLR is extremely slow, even compared to an older PC. For example, the floating point math performance of the Xbox CLR (which is only used with XNA programs) was about 1/10th that of my Intel E8400. This is compounded by the fact that Minecraft uses 64-bit floating point precision for all positions in the game (unless the 360 version has less precision than the Java version). Programs written in XNA for the Xbox also cannot contain any "unsafe" code, which means things like pointer manipulation are off-limits, and many of the low-level optimizations you'd want require "unsafe" code. Also, XNA programs don't have access to as many processor threads. And finally, the Xbox CLR (C#/XNA) has a very slow garbage collector, which causes a huge performance hit every time it collects. Programs can work around this, but it can be a pain. Based on all of this, I'd guess it's written in C++, not C# (XNA).
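The standard workaround alluded to above is to pre-allocate everything and reuse it, so the collector has almost no garbage to find during gameplay. Here's a minimal object-pool sketch (written in Java for consistency with the rest of this thread, though on XNA it would be C#; the `Particle` type and all names are made up for illustration):

```java
// Sketch of a fixed-size object pool: allocate instances up front and
// recycle them, so no per-frame `new` calls feed the garbage collector.
// Hypothetical example, not code from any actual game.
import java.util.ArrayDeque;

class Particle {
    float x, y, dx, dy;
    void reset(float x, float y, float dx, float dy) {
        this.x = x; this.y = y; this.dx = dx; this.dy = dy;
    }
}

public class ParticlePool {
    private final ArrayDeque<Particle> free = new ArrayDeque<>();

    public ParticlePool(int capacity) {
        // All allocation happens once, at load time.
        for (int i = 0; i < capacity; i++) free.push(new Particle());
    }

    // Hand out a recycled instance instead of allocating a new one.
    public Particle acquire(float x, float y, float dx, float dy) {
        Particle p = free.isEmpty() ? new Particle() : free.pop();
        p.reset(x, y, dx, dy);
        return p;
    }

    // Return the instance to the pool when it's done.
    public void release(Particle p) { free.push(p); }

    public int available() { return free.size(); }
}
```

The design trade-off is exactly the "pain" mentioned above: you take manual ownership of object lifetimes back from the garbage collector, which is the thing a managed runtime was supposed to spare you.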
Edit: Just to be clear, it's not that C# is slow; it's that the Xbox CLR is very slow, which makes writing high-performance games for the Xbox difficult. As an example, I've written a game engine that ran at about 120 fps on my Radeon 9800 SE and about 40 fps on the Xbox. The discrepancy wasn't the GPU; the Xbox has a much faster GPU than a 9800 SE.
I don't believe Java's performance is much different from C#'s; both run on a virtual machine (the JVM vs. the CLR), and both have JIT compilers. Here's a more thorough explanation:
Generally, C# and Java can be just as fast or faster because the JIT compiler -- a compiler that compiles your IL the first time it's executed -- can make optimizations that a pre-compiled C++ program cannot, because it can query the machine it's running on. It can determine whether the machine is Intel or AMD; a Pentium 4, Core Solo, or Core Duo; whether it supports SSE4; and so on.
A C++ program has to be compiled beforehand, usually with mixed optimizations, so that it runs decently on all machines but isn't optimized as far as it could be for any single configuration (i.e. processor, instruction set, other hardware).
Additionally, certain language features in C# and Java let the compiler make assumptions about your code and optimize away parts that a C/C++ compiler can't safely touch. When pointers are in play, a lot of optimizations simply aren't safe.
Also, Java and C# can do heap allocations more efficiently than C++, because the layer of abstraction between the garbage collector and your code lets it do all of its heap compaction at once (a fairly expensive operation).
Now, I can't speak for Java on this next point, but I know that C#, for example, will actually remove methods and method calls when it knows the body of the method is empty. And it applies this kind of logic throughout your code.
So as you can see, there are lots of reasons why certain C# or Java implementations will be faster.
Now, all that said, specific optimizations can be made in C++ that will blow away anything you could do in C#, especially in the graphics realm and any time you're close to the hardware. Pointers do wonders here.
So depending on what you're writing, I'd go with one or the other. But if you're writing something that isn't hardware-dependent (i.e. not a driver, video game, etc.), I wouldn't worry about the performance of C# (again, I can't speak for Java). It'll do just fine.
On the Java side, @Swati points out a good article:
Here's a quote from the article linked in the above quote:
Edited by lordikon - 5/9/12 at 10:20pm
Pop quiz: Which language boasts faster raw allocation performance, the Java language, or C/C++? The answer may surprise you -- allocation in modern JVMs is far faster than the best performing malloc implementations. The common code path for new Object() in HotSpot 1.4.2 and later is approximately 10 machine instructions (data provided by Sun; see Resources), whereas the best performing malloc implementations in C require on average between 60 and 100 instructions per call (Detlefs, et. al.; see Resources). And allocation performance is not a trivial component of overall performance -- benchmarks show that many real-world C and C++ programs, such as Perl and Ghostscript, spend 20 to 30 percent of their total execution time in malloc and free -- far more than the allocation and garbage collection overhead of a healthy Java application (Zorn; see Resources).
JVMs are surprisingly good at figuring out things that we used to assume only the developer could know. By letting the JVM choose between stack allocation and heap allocation on a case-by-case basis, we can get the performance benefits of stack allocation without making the programmer agonize over whether to allocate on the stack or on the heap.
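The ~10-instruction figure quoted above is possible because a compacting collector leaves the free part of the heap contiguous, so allocating is just a bounds check plus a pointer bump. Here's a toy illustration of that idea over a byte array; this is only the concept, not how HotSpot is actually implemented, and all names are made up:

```java
// Toy bump-pointer allocator: "allocating" is one bounds check and one
// offset increment, which is why allocation in a compacted heap can be
// far cheaper than a general-purpose malloc's free-list search.
// Conceptual sketch only, not a real JVM internal.
public class BumpArena {
    private final byte[] heap;
    private int top = 0; // next free offset in the arena

    public BumpArena(int size) {
        heap = new byte[size];
    }

    // Returns the offset of the allocated region, or -1 if out of space.
    // (A real JVM would trigger a garbage collection here instead.)
    public int allocate(int bytes) {
        if (top + bytes > heap.length) return -1;
        int offset = top;
        top += bytes;
        return offset;
    }

    public int used() {
        return top;
    }
}
```

Successive allocations just slide `top` forward, handing out adjacent regions; malloc, by contrast, must search its bookkeeping structures for a suitably sized free hole, which is where those extra 50-90 instructions go.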