Question
Processing was designed to make drawing with Java much easier. Processing for Android has the power of its desktop sibling plus access to sensor data. Putting these together, shouldn't it be easy to display a stereoscopic image and move around it, like the Oculus Rift or Google Cardboard?
Answer 1:
The code below renders the scene into two viewports - one for the left eye and one for the right. Viewed through a Google Cardboard device, the result looks 3D. Accelerometer and gyroscope data move the 3D image as the head moves. The only known bug is a Processing for Android issue: the program crashes unless it is started in landscape mode. I am using Processing 2.0.3 and Android 4.3, so this may have been addressed in current versions (although I did see it was still an open issue in the Processing-Bugs discussion on GitHub). The texture image is a 100 x 100 pixel image of a favorite cartoon character; use whatever you want and just store the image in the data folder.
//Scott Little 2015, GPLv3
//pBoard is Processing for Cardboard
import android.os.Bundle; //for preventing sleep
import android.view.WindowManager;
import ketai.sensors.*; //ketai library for sensors
KetaiSensor sensor;
float ax,ay,az,mx,my,mz; //sensor variables
float eyex = 50; //camera variables
float eyey = 50;
float eyez = 0;
float panx = 0;
float pany = 0;
PGraphics lv; //left viewport
PGraphics rv; //right viewport
PShape s; //the object to be displayed
//********************************************************************
// The following code is required to prevent sleep.
//********************************************************************
void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);
  // fix so the screen doesn't go to sleep while the app is active
  getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
}
//********************************************************************
void setup() {
  sensor = new KetaiSensor(this);
  sensor.start();
  size(displayWidth, displayHeight, P3D); // set the P3D renderer
  orientation(LANDSCAPE); // crashes if not started in this orientation
  lv = createGraphics(displayWidth/2, displayHeight, P3D); // left viewport
  rv = createGraphics(displayWidth/2, displayHeight, P3D); // right viewport
  PImage img = loadImage("jake.jpg"); // texture image
  s = createShape();
  TexturedCube(img, s, 50, 50);
}
void draw() {
  // update camera and pan from sensor data, then draw both viewports
  panx = panx - mx*10;
  pany = 0;
  eyex = 0;
  eyey = -20*az;
  ViewPort(lv, eyex, eyey, panx, pany, -15); // left viewport
  ViewPort(rv, eyex, eyey, panx, pany, 15);  // right viewport
  // compose the two viewports onto the main panel
  image(lv, 0, 0);
  image(rv, displayWidth/2, 0);
}
// sensor callbacks (Ketai library)
void onAccelerometerEvent(float x, float y, float z) {
  ax = x;
  ay = y;
  az = z;
}
void onGyroscopeEvent(float x, float y, float z) {
  mx = x;
  my = y;
  mz = z;
}
// draw the scene into one viewport with a horizontal eye offset
void ViewPort(PGraphics v, float x, float y, float px, float py, int eyeoff) {
  v.beginDraw();
  v.background(102);
  v.lights();
  v.pushMatrix();
  v.camera(x+eyeoff, y, 300, px, py, 0, 0.0, 1.0, 0.0);
  v.noStroke();
  //v.box(100);
  v.shape(s);
  v.popMatrix();
  v.endDraw();
}
// put a texture on a PShape object, 6 faces for a cube
void TexturedCube(PImage tex, PShape s, int a, int b) {
  s.beginShape(QUADS);
  s.texture(tex);
  // +Z "front" face
  s.vertex(-a, -a,  a, 0, b);
  s.vertex( a, -a,  a, b, b);
  s.vertex( a,  a,  a, b, 0);
  s.vertex(-a,  a,  a, 0, 0);
  // -Z "back" face
  s.vertex( a, -a, -a, 0, 0);
  s.vertex(-a, -a, -a, b, 0);
  s.vertex(-a,  a, -a, b, b);
  s.vertex( a,  a, -a, 0, b);
  // +Y "bottom" face
  s.vertex(-a,  a,  a, 0, 0);
  s.vertex( a,  a,  a, b, 0);
  s.vertex( a,  a, -a, b, b);
  s.vertex(-a,  a, -a, 0, b);
  // -Y "top" face
  s.vertex(-a, -a, -a, 0, 0);
  s.vertex( a, -a, -a, b, 0);
  s.vertex( a, -a,  a, b, b);
  s.vertex(-a, -a,  a, 0, b);
  // +X "right" face
  s.vertex( a, -a,  a, 0, 0);
  s.vertex( a, -a, -a, b, 0);
  s.vertex( a,  a, -a, b, b);
  s.vertex( a,  a,  a, 0, b);
  // -X "left" face
  s.vertex(-a, -a, -a, 0, 0);
  s.vertex(-a, -a,  a, b, 0);
  s.vertex(-a,  a,  a, b, b);
  s.vertex(-a,  a, -a, 0, b);
  s.endShape();
}
Answer 2:
It will be easy to display a bad stereoscopic image. There are reasons why it took the Oculus team so long to get it right ;)
First of all, you need to know that people cross their eyes to varying degrees to focus on objects that are near or far from them. If you set your cameras up perfectly parallel, everything will look right only when the user focuses at infinity. If you instead turn each camera inward by a fixed amount, without eye tracking, you get toe-in stereo, which is used in 3D movies and suffers from the keystone effect. The best thing to do without proper eye tracking is to use a skewed camera frustum - off-axis projection. More on this can be found here.
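To make the off-axis idea concrete, here is a minimal sketch (not Cardboard SDK code; all dimensions are illustrative). Each eye keeps a view direction parallel to the other, but its frustum is skewed so both frustums still meet the same physical screen rectangle:

```java
// Off-axis (asymmetric) frustum for one eye: the view direction stays
// parallel for both eyes, but the near-plane bounds are skewed by the
// eye's horizontal offset from the screen's center axis.
// All dimensions are illustrative values in metres.
public class OffAxisFrustum {
    // Returns {left, right, bottom, top} near-plane bounds for an eye
    // offset horizontally by eyeOffset from the screen's center axis.
    static double[] frustum(double screenHalfW, double screenHalfH,
                            double screenDist, double near, double eyeOffset) {
        double scale = near / screenDist; // project screen edges onto the near plane
        return new double[] {
            (-screenHalfW - eyeOffset) * scale,  // left
            ( screenHalfW - eyeOffset) * scale,  // right
            -screenHalfH * scale,                // bottom
             screenHalfH * scale                 // top
        };
    }

    public static void main(String[] args) {
        double ipd = 0.064; // typical interpupillary distance, metres
        double[] l = frustum(0.15, 0.09, 0.5, 0.1, -ipd / 2); // left eye
        double[] r = frustum(0.15, 0.09, 0.5, 0.1,  ipd / 2); // right eye
        // The two frustums are mirror images, each skewed toward the centre.
        System.out.printf("left eye:  l=%.4f r=%.4f%n", l[0], l[1]);
        System.out.printf("right eye: l=%.4f r=%.4f%n", r[0], r[1]);
    }
}
```

Note that the two frustums are asymmetric but share the same screen plane, which is exactly what parallel cameras with a plain symmetric frustum cannot give you.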
There are other problems too. For example, when you turn your head, you don't just change your eyes' orientation - you also change their absolute position in 3D space. If you simply apply your phone's rotation to your cameras, the effect will be off. That's why you should use at least a head model. The current version of the Cardboard SDK models a neck, to account for the vertical translation when looking up or down.
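A head model can be sketched in a few lines: the eyes swing on a lever from a pivot near the base of the neck, so a pure rotation of the head still translates the eye position. The lengths below are illustrative, loosely in the spirit of the Cardboard SDK's neck model, not its actual values:

```java
// Simple head/neck model: rotating the head by `pitch` also moves the
// eyes, because they sit on a lever arm from a pivot at the neck.
// Lengths are illustrative values in metres.
public class HeadModel {
    static final double NECK_VERT = 0.075; // pivot to eye level, vertical
    static final double NECK_HORZ = 0.080; // pivot to eyes, forward

    // Eye position {forward, up} relative to the neck pivot for a given
    // head pitch in radians (positive = looking up).
    static double[] eyePosition(double pitch) {
        double forward = NECK_HORZ * Math.cos(pitch) - NECK_VERT * Math.sin(pitch);
        double up      = NECK_VERT * Math.cos(pitch) + NECK_HORZ * Math.sin(pitch);
        return new double[] {forward, up};
    }

    public static void main(String[] args) {
        double[] level = eyePosition(0);
        double[] up30 = eyePosition(Math.toRadians(30));
        // Looking up raises the eyes and pulls them back: the camera
        // position changes, not just its orientation.
        System.out.printf("level: fwd=%.3f up=%.3f%n", level[0], level[1]);
        System.out.printf("up30:  fwd=%.3f up=%.3f%n", up30[0], up30[1]);
    }
}
```

Feeding the resulting eye position into the camera, instead of rotating a fixed camera in place, is what keeps near objects from looking glued to the head.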
There are many other problems as well: the pincushion distortion of the image caused by the headset's lenses, head tracking, calibrating everything to a specific phone, headset, and pair of lenses, the user's interpupillary distance... the list goes on.
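The lens distortion, for instance, is commonly modelled with a radial polynomial, r' = r * (1 + k1*r^2 + k2*r^4); the renderer then pre-warps the image with the inverse of this model so the lens cancels it out. The coefficients below are made-up examples; real values come from calibrating a specific lens and phone combination:

```java
// Radial model of a lens's pincushion distortion: points are pushed
// outward more strongly the further they are from the optical centre.
// The renderer applies the inverse of this mapping as a barrel pre-warp.
// k1 and k2 are made-up example coefficients, not real calibration data.
public class LensDistortion {
    static double distort(double r, double k1, double k2) {
        double r2 = r * r;
        return r * (1 + k1 * r2 + k2 * r2 * r2);
    }

    public static void main(String[] args) {
        double k1 = 0.22, k2 = 0.24; // illustrative coefficients
        // Near the centre the mapping is almost the identity; at the edge
        // of the field of view the displacement is large.
        System.out.printf("r=0.1 -> %.4f%n", distort(0.1, k1, k2));
        System.out.printf("r=1.0 -> %.4f%n", distort(1.0, k1, k2));
    }
}
```

Skipping this step (or using coefficients for the wrong lenses) is one of the easiest ways to produce the "something is off" feeling described below.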
All in all, no, VR is not a trivial or simple matter. The problem is that when it's done badly, it isn't immediately obvious. Users may not consciously know that something is wrong, but their brain will. It has been trained all their lives in interpreting the reality around it, and it's good at knowing when something is off. Badly done VR apps may cause disorientation, headaches, eye strain, nausea, or simply an unsatisfying experience. Some of the big players in the VR world are afraid that a flood of badly done VR apps will give a lot of people a bad experience, scaring them away from the technology and preventing it from becoming popular.
In short, if you want to do VR, either make sure you REALLY know what you are doing, or use an SDK/framework made by specialists.
Source: https://stackoverflow.com/questions/28779552/how-do-you-use-processing-for-android-to-display-a-stereoscopic-image-in-a-googl