Below 12 or 15 FPS, the human eye notices each frame switch, which produces a flicker effect; and the slower it gets, the closer cinema comes to a slide show. Early silent cinema ran at around 15 or 18 FPS, barely meeting the threshold of human vision needed to create the illusion of movement from a succession of still images. Anything above 18 FPS looks perfectly smooth and in "live" motion, including the slight apparent "breathing" caused by the photochemical grain in a static shot of a perfectly still landscape.
The standard projection rate of 24FPS made silent cinema (shot at a lower rate) look faster, which led to the "Benny Hill chase" effect of characters moving faster than usual. Not to mention that early cameras were operated manually: operators advanced the film stock with a hand crank, following a metronome or singing a tune with a regular rhythm. So the footage was captured at a reasonably constant speed, but not a perfectly constant one, while projection was automated by a very regular motor. Hence the possible discrepancy between the actual recorded and projected speeds.
There is nothing sacred, optimal, or mythical about 24FPS, no matter what Godard declares... (see When do Images Turn Into Cinema?) It was a rather arbitrary standard defined by the industry for practical and economical reasons. And just as the film strip is not ontologically attached to the invention of cinema, 24FPS does not define "true cinema" any more than any other workable frame rate. HFR is only another technology-based "look", just as the transition from black & white to colour stirred strong emotions among filmmakers and audiences alike, or the abandonment of the typical Technicolor "look" for truer colours, or the more recent "TV look" of the more perfect digital image. We're not used to sudden change, so we cling to familiar aesthetics, nurtured by nostalgia. If HFR looks different, it's mostly because it has been pushed to the maximum of its possibilities, to show off a "life-like" image. But it would be easy to apply post-production effects and deliberately "degrade" this perfection with traditional filters in order to recreate the vintage "celluloid look". HFR will fix the problems inherent in camera capture at a slow frame rate, like motion blur, and will offer a wider range of on-screen aesthetics, from "reality-through-a-window" to "HDTV" to vintage "physical filmstrip" to primitive silent cinema...
Since 3D cinema creates its illusion of depth with the projection technology of 2D cinema, at the same frame rate, it obviously renders 3D films darker: each eye receives half as much light per second as with a 2D film (whichever frame rate is used). Whether the stereoscopic technology is based on anaglyph colour (red/cyan, or green/magenta), polarized light, or mechanical shutters with synchronised glasses, the projected film is dimmed either by the colour/polarizing filters or by the alternate occlusion of each eye. Even the latest 3D technology cannot escape that effect, because it needs to cram in two dissociated films (the film viewed from the left-eye vantage point and the film viewed from the right-eye vantage point) to reproduce normal stereoscopic human vision. Every other frame is washed out (anaglyph), darkened (polarized), or blocked out (shutter glasses) for a given eye, so that it only sees the frames belonging to its own film. The rest of the time, that eye sees nothing (while the other eye is exposed to an image, however brief that exposure, or however the projector double- or triple-flashes it).
If you take off your 3D glasses during the projection of a 3D movie, all you see is a blurry combination of the left-eye and right-eye images merged at 24FPS, because the human eye is incapable of distinguishing an alternation of two films 24 times a second. The brain perceives only one stream of visual input that dances left and right by a few centimeters 24 times a second, which looks like an out-of-focus defect. The blur indicates that the input has exceeded the perceptual threshold, not only at the movies but in real life too: if you wag a finger very fast in front of your eye, or look at a spinning bicycle wheel, it becomes blurry (or jittery) once the motion exceeds the limits of human vision (the processing time of our brains).
If you close one eye, with the 3D glasses on, you see only the one stream of images destined for the open eye, and it is blocked half the time, 12 times a second (or 24 times with HFR). Thus there is no blur from the merger of the two films; it only turns a stereoscopic cinema (3D) into a monoscopic cinema (2D), reduced to 12FPS. At 24FPS, a 2D film switches frames 24 times per second, spending only a minimal lapse (maybe less than 10% of 1/24th of a second) blocked out while the projector mechanism moves the next still image into place. But in a 3D movie, each eye sees a film that spends a full 1/24th of a second blocked out (while the other eye is exposed to an image), and this 12 times per second. So if you close one eye, you will see a clear 2D movie (from only one of the two vantage points provided by the 3D movie), but darker than usual, because it spends half the time blocked out, even if, at 12FPS, we can't really notice this swift occlusion.
So overall the brain adds up the visual input from both eyes and mixes it into one single stereoscopic film; however, adding two unilateral films at 12FPS does not make a unified film at 24FPS... it drops the effective frame rate to 12FPS, even if the original film print is projected at 24FPS.
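The arithmetic of this alternating left/right model can be sketched in a few lines (the function names are mine, purely illustrative of the reasoning above):

```python
# Sketch of the alternating-eye model of 24FPS 3D projection
# described above. Helper names are hypothetical, for illustration.

def per_eye_rate(projector_fps: float) -> float:
    """Each eye sees only every other frame of the alternation."""
    return projector_fps / 2.0

def per_eye_dark_fraction() -> float:
    """While the other eye is shown its frame, this eye is blocked:
    half of the running time, whatever the projector rate."""
    return 0.5

print(per_eye_rate(24))         # each eye effectively gets 12.0 FPS
print(per_eye_dark_fraction())  # and sits in the dark 0.5 of the time
```

Note that raising the projector rate raises the per-eye rate, but the dark fraction stays at one half regardless, which is the luminosity point made further down.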
The transition from physical film print (film reel) to digital projection (DCP) meets the exact same problem, because the technological leap does not affect this aspect. The "silver screen" is a metallic screen, more reflective than a simple white sheet, and therefore bounces more light back (or, more exactly, dissipates less of the incoming light on the rebound) towards the spectator. This partially mitigates the darker-image issue, but doesn't address the frame rate problem.
At such a low frame rate, the camerawork is highly sensitive to violent movements. Maybe the popularisation of the shaky cam is a way to familiarize the audience with blurry motion at 12FPS. Lateral panoramic camera movements in particular result in a jittery, discontinuous, blurry image. This technical limitation (combined with the intensive use of CGI, which also needs to hide its seams) might explain why the last decade has developed a cinema aesthetic based on less-than-perfect motion (shaky, blurry, explosive, contradictory) and accelerated editing (intensified continuity, chaos cinema, ever shorter bursts of integral continuity), as well as fewer wide camera movements between cuts (no plan-séquence, long takes, majestic tracking shots, panoramas...). The flyover shot (airborne camera) is less troublesome because there is no foreground reference; the whole image is in a distant background, where displacement within the frame is much slower. Only the foreground of a lateral pan, crossing the frame quickly (in a couple of frames), will appear jittery.
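To put a rough number on that foreground jitter, here is a minimal sketch under assumed figures (a 2K-wide digital frame, a foreground object crossing it in half a second; both numbers are mine, not from the text):

```python
# Rough illustration of foreground judder in a fast lateral pan.
# FRAME_WIDTH_PX and CROSSING_TIME_S are assumptions for the example.

FRAME_WIDTH_PX = 2048    # width of a 2K digital frame
CROSSING_TIME_S = 0.5    # foreground object crosses the frame in 0.5 s

def jump_per_frame(fps: float) -> float:
    """Pixels the object displaces between two consecutive frames."""
    return FRAME_WIDTH_PX / (CROSSING_TIME_S * fps)

for fps in (24, 48, 1000):
    print(f"{fps:>4} FPS: {jump_per_frame(fps):6.1f} px per frame")
```

Under these assumptions the object leaps about 170 pixels between frames at 24FPS but only about 4 at 1000FPS, which is why fast pans stop strobing at very high rates.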
It's no coincidence that Peter Jackson wanted to shoot his 3D movie (The Hobbit) at 48FPS: it doubles the frame rate of a 24FPS 3D movie, thus restores the usual 24FPS rate for each eye, and will consequently make the experience brighter (or at least as bright as it was with a 24FPS 2D film). There is no problem with that (except perhaps for inexperienced digital projectionists).
And if James Cameron wants to impose a new standard at 60FPS, it will match the old standard and raise it by 6FPS for each eye (corresponding to a 2D film at 30FPS instead of 24, an improvement of +25%). Same correction. No big deal.
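The per-eye arithmetic for those two proposals, following the same halving logic as above (a sketch; the variable names are mine):

```python
# Per-eye rates of the 48FPS and 60FPS proposals under the
# alternating-eye model discussed earlier. Names are illustrative.

def per_eye(projector_fps: float) -> float:
    return projector_fps / 2.0

jackson = per_eye(48)   # 24.0 per eye: parity with a 2D film at 24FPS
cameron = per_eye(60)   # 30.0 per eye
gain = (cameron - 24.0) / 24.0 * 100  # 25.0 percent over the old norm
print(jackson, cameron, gain)
```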
I'm not sure, though, that a higher frame rate will fix the luminosity issue: however many frames you project per second, the screen is still blocked out for each eye exactly half the time. But it will definitely improve the rendition of motion on screen.
If cinema projection were to reach 1000FPS, it wouldn't change the ontological identity of cinema, on a film strip or in digital projection. The process only restores synthesized motion from still pictures (analytical motion), and could be achieved through many different mechanisms... The concept of cinema is only that: to retranscribe motion, to show one single image, "alive", moving seamlessly, to show as many perceptual changes as we can notice in real life within the limitations of our biological eye. At 1000FPS, most of the gain in motion quality wouldn't even register for a human eye, so it would be a waste (there is also an upper threshold above which the human eye cannot perceive more information per second), but it could possibly make the experience more comfortable, less tiring, and certainly less jittery in certain circumstances (like the foreground of a lateral pan, whip zooms, drive-bys, the foreground during tracking shots, and the smoother rolling of onscreen credits when text moves rapidly in the plane of the screen).
An ACTUALLY high frame rate (more than 24FPS for either eye) will make the capture of motion more precise during shooting (by sampling more still images along the trajectory) and the display smoother during projection. So there is no rational reason to oppose the natural evolution of the technology, even if this technological transition is abused by wealthy studios to annihilate the niche markets that thrive on low-tech, retro-tech projection.
Just like the CD killed the vinyl, the VHS killed the 16mm, the DVD killed the VHS, and the BluRay is killing the DVD, VoD is going to kill the physical print, just as the DCP will entirely replace all the local institutions based on lending/renting physical film prints. The diversity of the offerings and access to the available stock are the real concerns, not the necessary improvements of a technology that has barely changed since 1894!