libgdx TiledMap Rendering Performance Issue


Question


So I had performance issues with my libgdx project and tracked them down to the map rendering. I isolated the issue by creating an empty project that does as little as possible: just the map rendering. This is the code I came up with:

The desktop start-up project class:

package com.me.test;

import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;

public class Main {
    public static void main(String[] args) {
        LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();
        cfg.title = "performanceTest";
        cfg.useGL20 = false; // doesn't make a difference...
        cfg.width = 1080;
        cfg.height = cfg.width / 12 * 9; // 810

        new LwjglApplication(new Test(), cfg);
    }
}

The actual Code:

package com.me.test;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.maps.tiled.TiledMap;
import com.badlogic.gdx.maps.tiled.TmxMapLoader;
import com.badlogic.gdx.maps.tiled.renderers.OrthogonalTiledMapRenderer;

public class Test implements ApplicationListener {
    private OrthographicCamera camera;
    private TiledMap map;
    private static OrthogonalTiledMapRenderer renderer;

    @Override
    public void create() {
        float w = Gdx.graphics.getWidth();
        float h = Gdx.graphics.getHeight();

        camera = new OrthographicCamera(w, h);

        // Load the map and set up the renderer once; the view never changes
        TmxMapLoader maploader = new TmxMapLoader();

        map = maploader.load("test.tmx");
        renderer = new OrthogonalTiledMapRenderer(map, 1);
        renderer.setView(camera);
    }

    @Override
    public void dispose() {
        renderer.dispose();
        map.dispose();
    }

    @Override
    public void render() {
        // Clear the screen and draw nothing but the tile map
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

        renderer.render();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }
}

Now the weird thing is: not only is the CPU usage ridiculously high, it also jumps as soon as more than 128 tiles have to be rendered. Below 129 tiles the performance is always the same (this process alone takes about 2-4%), but rendering 129 tiles or more takes about 40-60%! Now my question is: why is that? Am I doing something wrong? I can't imagine the renderer from libgdx would have such a fatal flaw... and making a game using only 128 tiles on screen isn't an option :)

Thanks for any answers or thoughts!

Environment:

  • Eclipse Kepler Service Release 1
  • libgdx version 0.9.9
  • Windows 7
  • Graphics Chip: NVIDIA GeForce GTX 560 Ti
  • CPU: Pentium Dual-Core 2.70GHz

Rendering 128 Tiles:

Rendering 129 Tiles:


Answer 1:


I got the solution:

Just add

Mesh.forceVBO=true;

before the app starts.
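
For reference, here is a minimal sketch of where that flag could go in the desktop launcher from the question (assuming the libgdx 0.9.9 API, where Mesh.forceVBO is a public static field); it just has to run before the LwjglApplication is created:

package com.me.test;

import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;
import com.badlogic.gdx.graphics.Mesh;

public class Main {
    public static void main(String[] args) {
        // Force libgdx to render meshes with vertex buffer objects instead of vertex arrays.
        // This must be set before the application (and its GL context) is created.
        Mesh.forceVBO = true;

        LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();
        cfg.title = "performanceTest";
        cfg.width = 1080;
        cfg.height = cfg.width / 12 * 9; // 810

        new LwjglApplication(new Test(), cfg);
    }
}

The idea behind the flag: with VBOs the vertex data stays in GPU memory, while vertex arrays are resubmitted from the CPU every frame, which some drivers handle very slowly; that would also explain why only some machines are affected.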

First of all, this is probably a hardware issue. On other computers everything runs smoothly, but others have had the same problem. I found the answer on the BadlogicGames forum; more details are there:

Badlogic Forum Post



Source: https://stackoverflow.com/questions/21886121/libgdx-tiledmap-rendering-performance-issue
