Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (26)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Adding notes and captions to images

    7 February 2011

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Final creation of the channel

    12 March 2010

    Once your request has been approved, you can proceed with the actual creation of the channel. Each channel is a fully fledged site placed under your responsibility. The platform administrators have no access to it.
    Upon approval, you receive an email inviting you to create your channel.
    To do so, simply go to its address, in our example "http://votre_sous_domaine.mediaspip.net".
    At that point you are asked for a password; you simply have to (...)

On other sites (5331)

  • Recording a video using MediaRecorder

    21 July 2016, by Cédric Portmann

    I am currently using the TextureFromCameraActivity from Grafika to record a video in square (1:1) resolution. To that end I modified the GLES20.glViewport call so that the video gets moved to the top and appears square. Now I would like to record this square view using MediaRecorder, or at least record the camera at its normal resolution and then crop it using FFmpeg. However, I get the same error over and over again and I can't figure out why.

    The error I get:

    start called in an invalid state: 4

    And yes I added all the necessary permissions.

    android.permission.WRITE_EXTERNAL_STORAGE
    android.permission.CAMERA
    android.permission.RECORD_VIDEO
    android.permission.RECORD_AUDIO
    android.permission.STORAGE
    android.permission.READ_EXTERNAL_STORAGE
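
    For reference, the MediaRecorder documentation requires a strict call order: set the sources, then the profile and output file, then prepare(), and only then start(). The "invalid state: 4" message is commonly reported when start() runs on a recorder that has been configured but never prepared. A minimal sketch of the documented order (illustrative names and paths, not my actual code):

    import android.media.CamcorderProfile;
    import android.media.MediaRecorder;

    import java.io.IOException;

    class RecorderSketch {
        // Sketch of the documented MediaRecorder lifecycle; not a drop-in fix.
        static MediaRecorder startRecording(String outputPath) throws IOException {
            MediaRecorder recorder = new MediaRecorder();
            recorder.setAudioSource(MediaRecorder.AudioSource.DEFAULT); // 1. sources first
            recorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT);
            recorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));
            recorder.setOutputFile(outputPath);                         // 2. then the output file
            recorder.prepare();                                         // 3. prepare() ...
            recorder.start();                                           // 4. ... only after prepare()
            return recorder;
        }
    }

    After stop(), the recorder drops back to idle and must be fully reconfigured and prepared again before the next start().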

    Here is the modified code:

    https://github.com/google/grafika

    Thanks for your help :D

    package com.android.grafika;

    import android.graphics.SurfaceTexture;
    import android.hardware.Camera;
    import android.media.CamcorderProfile;
    import android.media.MediaRecorder;
    import android.opengl.GLES20;
    import android.opengl.Matrix;
    import android.os.Bundle;
    import android.os.Environment;
    import android.os.Handler;
    import android.os.Looper;
    import android.os.Message;
    import android.util.Log;
    import android.view.MotionEvent;
    import android.view.Surface;
    import android.view.SurfaceHolder;
    import android.view.SurfaceView;
    import android.view.View;
    import android.widget.Button;
    import android.widget.SeekBar;
    import android.widget.TextView;
    import android.app.Activity;
    import android.widget.Toast;

    import com.android.grafika.gles.Drawable2d;
    import com.android.grafika.gles.EglCore;
    import com.android.grafika.gles.GlUtil;
    import com.android.grafika.gles.Sprite2d;
    import com.android.grafika.gles.Texture2dProgram;
    import com.android.grafika.gles.WindowSurface;

    import java.io.File;
    import java.io.IOException;
    import java.lang.ref.WeakReference;


    public class TextureFromCameraActivity extends Activity implements View.OnClickListener, SurfaceHolder.Callback,
           SeekBar.OnSeekBarChangeListener {


       private static final int DEFAULT_ZOOM_PERCENT = 0;      // 0-100
       private static final int DEFAULT_SIZE_PERCENT = 80;     // 0-100
       private static final int DEFAULT_ROTATE_PERCENT = 75;    // 0-100

       // Requested values; actual may differ.
       private static final int REQ_CAMERA_WIDTH = 720;
       private static final int REQ_CAMERA_HEIGHT = 720;
       private static final int REQ_CAMERA_FPS = 30;

       // The holder for our SurfaceView.  The Surface can outlive the Activity (e.g. when
       // the screen is turned off and back on with the power button).
       //
       // This becomes non-null after the surfaceCreated() callback is called, and gets set
       // to null when surfaceDestroyed() is called.
       private static SurfaceHolder sSurfaceHolder;

       // Thread that handles rendering and controls the camera.  Started in onResume(),
       // stopped in onPause().
       private RenderThread mRenderThread;

       // Receives messages from renderer thread.
       private MainHandler mHandler;

       // User controls.
       private SeekBar mZoomBar;
       private SeekBar mSizeBar;
       private SeekBar mRotateBar;

       // These values are passed to us by the camera/render thread, and displayed in the UI.
       // We could also just peek at the values in the RenderThread object, but we'd need to
       // synchronize access carefully.
       private int mCameraPreviewWidth, mCameraPreviewHeight;
       private float mCameraPreviewFps;
       private int mRectWidth, mRectHeight;
       private int mZoomWidth, mZoomHeight;
       private int mRotateDeg;
       SurfaceHolder sh;
       MediaRecorder recorder;
       SurfaceHolder holder;
       boolean recording = false;

       public static final String TAG = "VIDEOCAPTURE";

       private static final File OUTPUT_DIR = Environment.getExternalStorageDirectory();


       @Override
       protected void onCreate(Bundle savedInstanceState) {
           super.onCreate(savedInstanceState);

           recorder = new MediaRecorder();



           setContentView(R.layout.activity_texture_from_camera);

           mHandler = new MainHandler(this);

           SurfaceView cameraView = (SurfaceView) findViewById(R.id.cameraOnTexture_surfaceView);
           sh = cameraView.getHolder();
           cameraView.setClickable(true);   // make the surface view clickable
           sh.addCallback(this);


           //prepareRecorder();


           mZoomBar = (SeekBar) findViewById(R.id.tfcZoom_seekbar);
           mSizeBar = (SeekBar) findViewById(R.id.tfcSize_seekbar);
           mRotateBar = (SeekBar) findViewById(R.id.tfcRotate_seekbar);
           mZoomBar.setProgress(DEFAULT_ZOOM_PERCENT);
           mSizeBar.setProgress(DEFAULT_SIZE_PERCENT);
           mRotateBar.setProgress(DEFAULT_ROTATE_PERCENT);
           mZoomBar.setOnSeekBarChangeListener(this);
           mSizeBar.setOnSeekBarChangeListener(this);
           mRotateBar.setOnSeekBarChangeListener(this);

           Button record_btn = (Button)findViewById(R.id.button);
           record_btn.setOnClickListener(this);
           initRecorder();


           updateControls();




       }





       @Override
       protected void onResume() {
           Log.d(TAG, "onResume BEGIN");
           super.onResume();

           mRenderThread = new RenderThread(mHandler);
           mRenderThread.setName("TexFromCam Render");
           mRenderThread.start();
           mRenderThread.waitUntilReady();

           RenderHandler rh = mRenderThread.getHandler();
           rh.sendZoomValue(mZoomBar.getProgress());
           rh.sendSizeValue(mSizeBar.getProgress());
           rh.sendRotateValue(mRotateBar.getProgress());

           if (sSurfaceHolder != null) {
               Log.d(TAG, "Sending previous surface");
               rh.sendSurfaceAvailable(sSurfaceHolder, false);
           } else {
               Log.d(TAG, "No previous surface");
           }
           Log.d(TAG, "onResume END");
       }

       @Override
       protected void onPause() {
           Log.d(TAG, "onPause BEGIN");
           super.onPause();

           RenderHandler rh = mRenderThread.getHandler();
           rh.sendShutdown();
           try {
               mRenderThread.join();
           } catch (InterruptedException ie) {
               // not expected
               throw new RuntimeException("join was interrupted", ie);
           }
           mRenderThread = null;
           Log.d(TAG, "onPause END");
       }

       @Override   // SurfaceHolder.Callback
       public void surfaceCreated(SurfaceHolder holder) {
           Log.d(TAG, "surfaceCreated holder=" + holder + " (static=" + sSurfaceHolder + ")");
           if (sSurfaceHolder != null) {
               throw new RuntimeException("sSurfaceHolder is already set");
           }

           sSurfaceHolder = holder;

           if (mRenderThread != null) {
               // Normal case -- render thread is running, tell it about the new surface.
               RenderHandler rh = mRenderThread.getHandler();
               rh.sendSurfaceAvailable(holder, true);
           } else {
               // Sometimes see this on 4.4.x N5: power off, power on, unlock, with device in
               // landscape and a lock screen that requires portrait.  The surface-created
               // message is showing up after onPause().
               //
               // Chances are good that the surface will be destroyed before the activity is
               // unpaused, but we track it anyway.  If the activity is un-paused and we start
               // the RenderThread, the SurfaceHolder will be passed in right after the thread
               // is created.
               Log.d(TAG, "render thread not running");
           }

           // Per the MediaRecorder docs, setPreviewDisplay() must be called before prepare().
           recorder.setPreviewDisplay(holder.getSurface());

       }

       @Override   // SurfaceHolder.Callback
       public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
           Log.d(TAG, "surfaceChanged fmt=" + format + " size=" + width + "x" + height +
                   " holder=" + holder);

           if (mRenderThread != null) {
               RenderHandler rh = mRenderThread.getHandler();
               rh.sendSurfaceChanged(format, width, height);
           } else {
               Log.d(TAG, "Ignoring surfaceChanged");
               return;
           }
       }

       @Override   // SurfaceHolder.Callback
       public void surfaceDestroyed(SurfaceHolder holder) {
           // In theory we should tell the RenderThread that the surface has been destroyed.
           if (mRenderThread != null) {
               RenderHandler rh = mRenderThread.getHandler();
               rh.sendSurfaceDestroyed();
           }
           Log.d(TAG, "surfaceDestroyed holder=" + holder);
           sSurfaceHolder = null;
       }

       @Override   // SeekBar.OnSeekBarChangeListener
       public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
           if (mRenderThread == null) {
               // Could happen if we programmatically update the values after setting a listener
               // but before starting the thread.  Also, easy to cause this by scrubbing the seek
               // bar with one finger then tapping "recents" with another.
               Log.w(TAG, "Ignoring onProgressChanged received w/o RT running");
               return;
           }
           RenderHandler rh = mRenderThread.getHandler();

           // "progress" ranges from 0 to 100
           if (seekBar == mZoomBar) {
               //Log.v(TAG, "zoom: " + progress);
               rh.sendZoomValue(progress);
           } else if (seekBar == mSizeBar) {
               //Log.v(TAG, "size: " + progress);
               rh.sendSizeValue(progress);
           } else if (seekBar == mRotateBar) {
               //Log.v(TAG, "rotate: " + progress);
               rh.sendRotateValue(progress);
           } else {
               throw new RuntimeException("unknown seek bar");
           }

           // If we're getting preview frames quickly enough we don't really need this, but
           // we don't want to have chunky-looking resize movement if the camera is slow.
           // OTOH, if we get the updates too quickly (60fps camera?), this could jam us
           // up and cause us to run behind.  So use with caution.
           rh.sendRedraw();
       }

       @Override   // SeekBar.OnSeekBarChangeListener
       public void onStartTrackingTouch(SeekBar seekBar) {}
       @Override   // SeekBar.OnSeekBarChangeListener
       public void onStopTrackingTouch(SeekBar seekBar) {}

       /**
        * Handles any touch events that aren't grabbed by one of the controls.
        */
       @Override
       public boolean onTouchEvent(MotionEvent e) {
           float x = e.getX();
           float y = e.getY();

           switch (e.getAction()) {
               case MotionEvent.ACTION_MOVE:
               case MotionEvent.ACTION_DOWN:
                   //Log.v(TAG, "onTouchEvent act=" + e.getAction() + " x=" + x + " y=" + y);
                   if (mRenderThread != null) {
                       RenderHandler rh = mRenderThread.getHandler();
                       rh.sendPosition((int) x, (int) y);

                       // Forcing a redraw can cause sluggish-looking behavior if the touch
                       // events arrive quickly.
                       //rh.sendRedraw();
                   }
                   break;
               default:
                   break;
           }

           return true;
       }

       /**
        * Updates the current state of the controls.
        */
       private void updateControls() {
           String str = getString(R.string.tfcCameraParams, mCameraPreviewWidth,
                   mCameraPreviewHeight, mCameraPreviewFps);
           TextView tv = (TextView) findViewById(R.id.tfcCameraParams_text);
           tv.setText(str);

           str = getString(R.string.tfcRectSize, mRectWidth, mRectHeight);
           tv = (TextView) findViewById(R.id.tfcRectSize_text);
           tv.setText(str);

           str = getString(R.string.tfcZoomArea, mZoomWidth, mZoomHeight);
           tv = (TextView) findViewById(R.id.tfcZoomArea_text);
           tv.setText(str);
       }

       @Override
       public void onClick(View view) {

           if (recording) {
               recorder.stop();
               recording = false;

               // Let's initRecorder so we can record again
               initRecorder();
               prepareRecorder();
           } else {
               recording = true;
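               // NOTE: start() is only valid on a prepared recorder; initRecorder()
               // configures sources and output, but prepareRecorder() is commented
               // out in onCreate(), so this first start() runs on an unprepared recorder.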
               recorder.start();
           }
       }


       private void initRecorder() {
           recorder.setAudioSource(MediaRecorder.AudioSource.DEFAULT);
           recorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT);

           CamcorderProfile cpHigh = CamcorderProfile
                   .get(CamcorderProfile.QUALITY_HIGH);
           recorder.setProfile(cpHigh);
           // NOTE: this output path ends at "AlphaRun" with no file name or
           // extension; setOutputFile() expects a full path to the output file.
           String path = Environment.getExternalStorageDirectory() + File.separator
                   + Environment.DIRECTORY_DCIM + File.separator + "AlphaRun";

           recorder.setOutputFile(path);
           recorder.setMaxDuration(50000); // 50 seconds
           recorder.setMaxFileSize(5000000); // Approximately 5 megabytes

       }

       private void prepareRecorder() {


           try {
               recorder.prepare();
           } catch (IllegalStateException e) {
               e.printStackTrace();
               finish();
           } catch (IOException e) {
               e.printStackTrace();
               finish();
           }
       }




       /**
        * Thread that handles all rendering and camera operations.
        */
       private static class RenderThread extends Thread implements
               SurfaceTexture.OnFrameAvailableListener {
           // Object must be created on render thread to get correct Looper, but is used from
           // UI thread, so we need to declare it volatile to ensure the UI thread sees a fully
           // constructed object.
           private volatile RenderHandler mHandler;

           // Used to wait for the thread to start.
           private Object mStartLock = new Object();
           private boolean mReady = false;

           private MainHandler mMainHandler;

           private Camera mCamera;
           private int mCameraPreviewWidth, mCameraPreviewHeight;

           private EglCore mEglCore;
           private WindowSurface mWindowSurface;
           private int mWindowSurfaceWidth;
           private int mWindowSurfaceHeight;

           // Receives the output from the camera preview.
           private SurfaceTexture mCameraTexture;

           // Orthographic projection matrix.
           private float[] mDisplayProjectionMatrix = new float[16];

           private Texture2dProgram mTexProgram;
           private final ScaledDrawable2d mRectDrawable =
                   new ScaledDrawable2d(Drawable2d.Prefab.RECTANGLE);
           private final Sprite2d mRect = new Sprite2d(mRectDrawable);

           private int mZoomPercent = DEFAULT_ZOOM_PERCENT;
           private int mSizePercent = DEFAULT_SIZE_PERCENT;
           private int mRotatePercent = DEFAULT_ROTATE_PERCENT;
           private float mPosX, mPosY;


           /**
            * Constructor.  Pass in the MainHandler, which allows us to send stuff back to the
            * Activity.
            */
           public RenderThread(MainHandler handler) {
               mMainHandler = handler;

           }

           /**
            * Thread entry point.
            */
           @Override
           public void run() {
               Looper.prepare();

               // We need to create the Handler before reporting ready.
               mHandler = new RenderHandler(this);
               synchronized (mStartLock) {
                   mReady = true;
                   mStartLock.notify();    // signal waitUntilReady()
               }

               // Prepare EGL and open the camera before we start handling messages.
               mEglCore = new EglCore(null, 0);
               openCamera(REQ_CAMERA_WIDTH, REQ_CAMERA_HEIGHT, REQ_CAMERA_FPS);

               Looper.loop();

               Log.d(TAG, "looper quit");
               releaseCamera();
               releaseGl();
               mEglCore.release();

               synchronized (mStartLock) {
                   mReady = false;
               }
           }

           /**
            * Waits until the render thread is ready to receive messages.
            * <p>
            * Call from the UI thread.
            */
           public void waitUntilReady() {
               synchronized (mStartLock) {
                   while (!mReady) {
                       try {
                           mStartLock.wait();
                       } catch (InterruptedException ie) { /* not expected */ }
                   }
               }
           }

           /**
            * Shuts everything down.
            */
           private void shutdown() {
               Log.d(TAG, "shutdown");
               Looper.myLooper().quit();
           }

           /**
            * Returns the render thread's Handler.  This may be called from any thread.
            */
           public RenderHandler getHandler() {
               return mHandler;
           }

           /**
            * Handles the surface-created callback from SurfaceView.  Prepares GLES and the Surface.
            */
           private void surfaceAvailable(SurfaceHolder holder, boolean newSurface) {

               Surface surface = holder.getSurface();
               mWindowSurface = new WindowSurface(mEglCore, surface, false);
               mWindowSurface.makeCurrent();

               // Create and configure the SurfaceTexture, which will receive frames from the
               // camera.  We set the textured rect's program to render from it.
               mTexProgram = new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT);
               int textureId = mTexProgram.createTextureObject();
               mCameraTexture = new SurfaceTexture(textureId);
               mRect.setTexture(textureId);

               if (!newSurface) {
                   // This Surface was established on a previous run, so no surfaceChanged()
                   // message is forthcoming.  Finish the surface setup now.
                   //
                   // We could also just call this unconditionally, and perhaps do an unnecessary
                   // bit of reallocating if a surface-changed message arrives.
                   mWindowSurfaceWidth = mWindowSurface.getWidth();
                   mWindowSurfaceHeight = mWindowSurface.getWidth();
                   finishSurfaceSetup();
               }

               mCameraTexture.setOnFrameAvailableListener(this);



           }

           /**
            * Releases most of the GL resources we currently hold (anything allocated by
            * surfaceAvailable()).
             * <p>
            * Does not release EglCore.
            */
           private void releaseGl() {
               GlUtil.checkGlError("releaseGl start");

               if (mWindowSurface != null) {
                   mWindowSurface.release();
                   mWindowSurface = null;
               }
               if (mTexProgram != null) {
                   mTexProgram.release();
                   mTexProgram = null;
               }
               GlUtil.checkGlError("releaseGl done");

               mEglCore.makeNothingCurrent();
           }

           /**
            * Handles the surfaceChanged message.
             * <p>
            * We always receive surfaceChanged() after surfaceCreated(), but surfaceAvailable()
            * could also be called with a Surface created on a previous run.  So this may not
            * be called.
            */
           private void surfaceChanged(int width, int height) {
               Log.d(TAG, "RenderThread surfaceChanged " + width + "x" + height);

               mWindowSurfaceWidth = width;
               mWindowSurfaceHeight = width;
               finishSurfaceSetup();
           }

           /**
            * Handles the surfaceDestroyed message.
            */
           private void surfaceDestroyed() {
               // In practice this never appears to be called -- the activity is always paused
               // before the surface is destroyed.  In theory it could be called though.
               Log.d(TAG, "RenderThread surfaceDestroyed");
               releaseGl();
           }

           /**
            * Sets up anything that depends on the window size.
             * <p>
            * Open the camera (to set mCameraAspectRatio) before calling here.
            */
           private void finishSurfaceSetup() {
               int width = mWindowSurfaceWidth;
               int height = mWindowSurfaceHeight;
               Log.d(TAG, "finishSurfaceSetup size=" + width + "x" + height +
                       " camera=" + mCameraPreviewWidth + "x" + mCameraPreviewHeight);

                // Viewport shifted vertically (modified from Grafika's full-window
                // default) so the preview is pushed toward the top and appears square.
                GLES20.glViewport(0, 700, width, height);

               // Simple orthographic projection, with (0,0) in lower-left corner.
               Matrix.orthoM(mDisplayProjectionMatrix, 0, 0, width, 0, height, -1, 1);

               // Default position is center of screen.
               mPosX = width / 2.0f;
               mPosY = height / 2.0f;

               updateGeometry();

               // Ready to go, start the camera.
               Log.d(TAG, "starting camera preview");
               try {
                   mCamera.setPreviewTexture(mCameraTexture);

               } catch (IOException ioe) {
                   throw new RuntimeException(ioe);
               }
               mCamera.startPreview();
           }

           /**
            * Updates the geometry of mRect, based on the size of the window and the current
            * values set by the UI.
            */
           private void updateGeometry() {
               int width = mWindowSurfaceWidth;
               int height = mWindowSurfaceHeight;


               int smallDim = Math.min(width, height);
               // Max scale is a bit larger than the screen, so we can show over-size.
               float scaled = smallDim * (mSizePercent / 100.0f) * 1.25f;
               float cameraAspect = (float) mCameraPreviewWidth / mCameraPreviewHeight;
               int newWidth = Math.round(scaled * cameraAspect);
               int newHeight = Math.round(scaled);

               float zoomFactor = 1.0f - (mZoomPercent / 100.0f);
               int rotAngle = Math.round(360 * (mRotatePercent / 100.0f));

               mRect.setScale(newWidth, newHeight);
               mRect.setPosition(mPosX, mPosY);
               mRect.setRotation(rotAngle);
               mRectDrawable.setScale(zoomFactor);

               mMainHandler.sendRectSize(newWidth, newHeight);
               mMainHandler.sendZoomArea(Math.round(mCameraPreviewWidth * zoomFactor),
                       Math.round(mCameraPreviewHeight * zoomFactor));
               mMainHandler.sendRotateDeg(rotAngle);
           }

           @Override   // SurfaceTexture.OnFrameAvailableListener; runs on arbitrary thread
           public void onFrameAvailable(SurfaceTexture surfaceTexture) {
               mHandler.sendFrameAvailable();
           }

           /**
            * Handles incoming frame of data from the camera.
            */
           private void frameAvailable() {
               mCameraTexture.updateTexImage();

               draw();
           }

           /**
            * Draws the scene and submits the buffer.
            */
           private void draw() {
               GlUtil.checkGlError("draw start");

               GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
               GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
               mRect.draw(mTexProgram, mDisplayProjectionMatrix);
               mWindowSurface.swapBuffers();

               GlUtil.checkGlError("draw done");
           }

           /**
            * Opens a camera, and attempts to establish preview mode at the specified width
            * and height with a fixed frame rate.
             * <p>
            * Sets mCameraPreviewWidth / mCameraPreviewHeight.
            */
           private void openCamera(int desiredWidth, int desiredHeight, int desiredFps) {
               if (mCamera != null) {
                   throw new RuntimeException("camera already initialized");
               }

               Camera.CameraInfo info = new Camera.CameraInfo();

                // Try to find a back-facing camera (stock Grafika looks for a front-facing one).
               int numCameras = Camera.getNumberOfCameras();
                for (int i = 0; i < numCameras; i++) {
                   Camera.getCameraInfo(i, info);
                   if (info.facing == Camera.CameraInfo.CAMERA_FACING_BACK) {
                       mCamera = Camera.open(i);
                       break;
                   }
               }
               if (mCamera == null) {
                    Log.d(TAG, "No back-facing camera found; opening default");
                   mCamera = Camera.open();    // opens first back-facing camera
               }
               if (mCamera == null) {
                   throw new RuntimeException("Unable to open camera");
               }

               Camera.Parameters parms = mCamera.getParameters();

               CameraUtils.choosePreviewSize(parms, desiredWidth, desiredHeight);
               parms.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE);
               // Try to set the frame rate to a constant value.
               int thousandFps = CameraUtils.chooseFixedPreviewFps(parms, desiredFps * 1000);

               // Give the camera a hint that we're recording video.  This can have a big
               // impact on frame rate.
               parms.setRecordingHint(true);

               mCamera.setParameters(parms);

               int[] fpsRange = new int[2];
               Camera.Size mCameraPreviewSize = parms.getPreviewSize();
               parms.getPreviewFpsRange(fpsRange);
               String previewFacts = mCameraPreviewSize.width + "x" + mCameraPreviewSize.height;
               if (fpsRange[0] == fpsRange[1]) {
                   previewFacts += " @" + (fpsRange[0] / 1000.0) + "fps";
               } else {
                   previewFacts += " @[" + (fpsRange[0] / 1000.0) +
                           " - " + (fpsRange[1] / 1000.0) + "] fps";
               }
               Log.i(TAG, "Camera config: " + previewFacts);

               mCameraPreviewWidth = mCameraPreviewSize.width;
               mCameraPreviewHeight = mCameraPreviewSize.height;
               mMainHandler.sendCameraParams(mCameraPreviewWidth, mCameraPreviewHeight,
                       thousandFps / 1000.0f);
           }

           /**
            * Stops camera preview, and releases the camera to the system.
            */
           private void releaseCamera() {
               if (mCamera != null) {
                   mCamera.stopPreview();
                   mCamera.release();
                   mCamera = null;
                   Log.d(TAG, "releaseCamera -- done");
               }
           }
       }

    }
  • QTableWidget and QProcess - update table based on multiple process results

    9 March 2017, by Spencer

    I have a Python program that runs through a QTableWidget and, for each item, runs a QProcess (an FFmpeg process, to be exact). What I'm trying to do is update the "parent" cell when the process completes. Right now a for loop goes through each row and launches a process for it, connecting the finished signal of that process to a "finished" function which updates the QTableWidget cell. I'm just having trouble properly telling the function which cell to update: right now I pass it the index of the current row (seeing as it is being spawned by the for loop), but by the time the processes start to finish it only ever gets the last row in the table. I'm quite new to Python and PyQt, so it is possible there is some fundamental thing I have wrong here!

    I tried passing the actual QTableWidgetItem instead of the index, but I got this error: "RuntimeError: wrapped C/C++ object of type QTableWidgetItem has been deleted"
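
    (One note on the pattern described above, offered as a sketch rather than a verified fix: a lambda such as "lambda: self.finished(projIndex)" captures the loop variable itself, not its value at connect time, so once the loop has ended every callback sees the final index. The usual workaround is to freeze the value with a default argument when connecting:)

    # Sketch only: bind the current row index at connect time.
    # 'exitCode' and 'exitStatus' receive whatever QProcess.finished emits;
    # 'row' is a hypothetical parameter name.
    proc = QtCore.QProcess(self)
    proc.finished.connect(
        lambda exitCode=0, exitStatus=0, row=projIndex: self.finished(row))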

    My code, the function "finished" and line #132 are the relevant ones:

    import sys, os, re
    from PyQt4 import QtGui, QtCore

    class BatchTable(QtGui.QTableWidget):
       def __init__(self, parent):
           super(BatchTable, self).__init__(parent)
           self.setAcceptDrops(True)
           self.setColumnCount(4)
           self.setColumnWidth(1,50)
           self.hideColumn(3)
           self.horizontalHeader().setStretchLastSection(True)
           self.setHorizontalHeaderLabels(QtCore.QString("Status;Alpha;File;Full Path").split(";"))

           self.doubleClicked.connect(self.removeProject)

       def removeProject(self, myItem):
           row = myItem.row()
           self.removeRow(row)

       def dragEnterEvent(self, e):
           if e.mimeData().hasFormat('text/uri-list'):
               e.accept()
           else:
               print "nope"
               e.ignore()

       def dragMoveEvent(self, e):
           e.accept()

       def dropEvent(self, e):
           if e.mimeData().hasUrls:
               for url in e.mimeData().urls():
                   chkBoxItem = QtGui.QTableWidgetItem()
                   chkBoxItem.setFlags(QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsEnabled)
                   chkBoxItem.setCheckState(QtCore.Qt.Unchecked)

                   rowPosition = self.rowCount()
                   self.insertRow(rowPosition)
                   self.setItem(rowPosition, 0, QtGui.QTableWidgetItem("Ready"))
                   self.setItem(rowPosition, 1, chkBoxItem)
                   self.setItem(rowPosition, 2, QtGui.QTableWidgetItem(os.path.split(str(url.toLocalFile()))[1]))
                   self.setItem(rowPosition, 3, QtGui.QTableWidgetItem(url.toLocalFile()))
                   self.item(rowPosition, 0).setBackgroundColor(QtGui.QColor(80, 180, 30))

    class ffmpegBatch(QtGui.QWidget):
       def __init__(self):
           super(ffmpegBatch, self).__init__()
           self.initUI()

       def initUI(self):

           self.edit = QtGui.QTextEdit()

           cmdGroup = QtGui.QGroupBox("Commandline arguments")
           fpsLbl = QtGui.QLabel("FPS:")
           self.fpsCombo = QtGui.QComboBox()
           self.fpsCombo.addItem("29.97")
           self.fpsCombo.addItem("23.976")
           hbox1 = QtGui.QHBoxLayout()
           hbox1.addWidget(fpsLbl)
           hbox1.addWidget(self.fpsCombo)
           cmdGroup.setLayout(hbox1)

           saveGroup = QtGui.QGroupBox("Output")
           self.outputLocation = QtGui.QLineEdit()
           self.browseBtn = QtGui.QPushButton("Browse")
           saveLocationBox = QtGui.QHBoxLayout()
           # Todo: add "auto-step up two folders" button
           saveLocationBox.addWidget(self.outputLocation)
           saveLocationBox.addWidget(self.browseBtn)
           saveGroup.setLayout(saveLocationBox)

           runBtn = QtGui.QPushButton("Run Batch Transcode")

           mainBox = QtGui.QVBoxLayout()
           self.table = BatchTable(self)
           # TODO: add "copy from clipboard" feature
           mainBox.addWidget(self.table)
           mainBox.addWidget(cmdGroup)
           mainBox.addWidget(saveGroup)
           mainBox.addWidget(runBtn)
           mainBox.addWidget(self.edit)

           self.setLayout(mainBox)
           self.setGeometry(300, 300, 600, 500)
           self.setWindowTitle('FFMPEG Batch Converter')

           # triggers/events
           runBtn.clicked.connect(self.run)

       def RepresentsInt(self, s):
           try:
               int(s)
               return True
           except ValueError:
               return False

       def run(self):
           if (self.outputLocation.text() == ''):
               return
           for projIndex in range(self.table.rowCount()):
               # collect some data
               ffmpeg_app = "C:\\Program Files\\ffmpeg-20150702-git-03b2b40-win64-static\\bin\\ffmpeg"
               frameRate = self.fpsCombo.currentText()
               inputFile = self.table.model().index(projIndex,3).data().toString()
               outputPath = self.outputLocation.text()
               outputPath = outputPath.replace("/", "\\")

               # format the input for ffmpeg
            # measure the trailing frame-number digits; 'd' ends up one less than the digit count
               imageName = os.path.split(str(inputFile))[1]
               imageName, imageExt = os.path.splitext(imageName)
               length = len(imageName)
               d = 0
               while (self.RepresentsInt(imageName[length-2:length-1]) == True):
                   length = length-1
                   d = d+1
               inputPath = os.path.split(str(inputFile))[0]
               inputFile = imageName[0:length-1]
               inputFile = inputPath + "/" + inputFile + "%" + str(d+1) + "d" + imageExt
               inputFile = inputFile.replace("/", "\\")

               # format the output
               outputFile = outputPath + "\\" + imageName[0:length-2] + ".mov"


               # build the commandline
               cmd = '"' + ffmpeg_app + '"' + ' -y -r ' + frameRate + ' -i ' + '"' + inputFile + '"' + ' -vcodec dnxhd -b:v 145M -vf colormatrix=bt601:bt709 -flags +ildct ' + '"' + outputFile + '"'

               # launch the process
               proc = QtCore.QProcess(self)
               proc.finished.connect(lambda: self.finished(projIndex))
               proc.setProcessChannelMode(proc.MergedChannels)
               proc.start(cmd)
               proc.readyReadStandardOutput.connect(lambda: self.readStdOutput(proc, projIndex, 100))
               self.table.setItem(projIndex, 0, QtGui.QTableWidgetItem("Running..."))
               self.table.item(projIndex, 0).setBackgroundColor(QtGui.QColor(110, 145, 30))

       def readStdOutput(self, proc, projIndex, total):
           currentLine = QtCore.QString(proc.readAllStandardOutput())
           currentLine = str(currentLine)
           frameEnd = currentLine.find("fps", 0, 15)
           if frameEnd != -1:
               m = re.search("\d", currentLine)
               if m:
                   frame = currentLine[m.start():frameEnd]
                   percent = (float(frame)/total)*100
                   print "Percent: " + str(percent)
                   self.edit.append(str(percent))
                   self.table.setItem(projIndex, 0, QtGui.QTableWidgetItem("Encoded: " + str(percent) + "%"))

       def finished(self, projIndex):
           # TODO: This isn't totally working properly for multiple processes (seems to get confused)
           print "A process completed"
           print self.sender().readAllStandardOutput()
           if self.sender().exitStatus() == 0:
               self.table.setItem(projIndex, 0, QtGui.QTableWidgetItem("Encoded"))
               self.table.item(projIndex, 0).setBackgroundColor(QtGui.QColor(45, 145, 240))


    def main():
       app = QtGui.QApplication(sys.argv)
       ex = ffmpegBatch()
       ex.show()
       sys.exit(app.exec_())

    if __name__ == '__main__':
       main()

    (And yes I do know that my percentage update is totally wrong right now, still working on that...)
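
    (On the percentage math: one common approach, sketched here with a hypothetical total_frames parameter, is to parse the frame counter out of FFmpeg's progress lines and divide by the expected total:)

    import re

    def percent_complete(line, total_frames):
        # FFmpeg progress lines look like: "frame=  123 fps= 25 q=-1.0 ..."
        # 'total_frames' must be known up front (e.g. counted from the input
        # image sequence); it is a hypothetical parameter here.
        m = re.search(r"frame=\s*(\d+)", line)
        if m:
            return 100.0 * int(m.group(1)) / total_frames
        return None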

  • GStreamer with MPEG-TS Video4Linux ATSC/DVB Recording

    21 June 2013, by Dustin Oprea

    I'm having an impossible time setting up a filtergraph to read from a recording that I made from my DVB video4linux device. Any help would be vastly appreciated.

    How I made the recording:

    To tune the channel:

    azap -c ~/channels.conf "Florida"

    To record the channel:

    cat /dev/dvb/adapter0/dvr0 > /tmp/test

    This is the way that recordings must be made (I cannot use any GST DVB plugins to do this for me).

    I used tstools to identify that the recording is a TS stream:

    tstools/bin$ ./stream_type ~/recordings/20130129-202049
    Reading from /home/dustin/recordings/20130129-202049
    It appears to be Transport Stream

    ...but that there are no PAT/PMT packets:

    tstools/bin$ ./tsinfo ~/recordings/20130129-202049
    Reading from /home/dustin/recordings/20130129-202049
    Scanning 10000 TS packets

    Found 0 PAT packets and 0 PMT packets in 10000 TS packets
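
    (One workaround worth noting, as an assumption rather than something I have verified against this capture: remuxing the file with FFmpeg regenerates PAT/PMT, since its mpegts muxer writes those tables itself, and demuxers that depend on them may then accept the result:)

    ffmpeg -i 20130129-202049 -vcodec copy -acodec copy 20130129-202049-remux.ts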

    I was able to produce a single elementary stream (ES) by running ts2es:

    tstools/bin$ ./ts2es -pid 97 ~/recordings/20130129-202049 ~/recordings/20130129-202049.es
    Reading from /home/dustin/recordings/20130129-202049
    Writing to   /home/dustin/recordings/20130129-202049.es
    Extracting packets for PID 0061 (97)
    !!! 4 bytes ignored at end of file - not enough to make a TS packet
    Extracted 219258 of 248113 TS packets

    I am able to play the ES stream (even though the video is frozen on the first frame):

    gst-launch-0.10 filesrc location=~/recordings/20130129-202049.es ! decodebin2 ! autovideosink
    gst-launch-0.10 filesrc location=~/recordings/20130129-202049.es ! decodebin2 ! xvimagesink
    gst-launch-0.10 playbin2 uri=file:///home/dustin/recordings/20130129-202049.es
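
    (A guess about the frozen first frame, not something I have verified: a raw ES carries no container timestamps, so putting an explicit parser in front of the decoder, rather than relying on decodebin2's typefinding, may help, assuming the mpegvideoparse and mpeg2dec elements are installed:)

    gst-launch-0.10 filesrc location=~/recordings/20130129-202049.es ! mpegvideoparse ! mpeg2dec ! xvimagesink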

    No matter what I do, though, I can't get the original TS file to open. However, it opens perfectly in MPlayer/FFmpeg (but not VLC). This is the output of FFmpeg:

    ffmpeg -i 20130129-202049

    ffmpeg version 0.8.5-4:0.8.5-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
     built on Jan 24 2013 18:03:14 with gcc 4.6.3
    *** THIS PROGRAM IS DEPRECATED ***
    This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
    [mpeg2video @ 0x9be7be0] mpeg_decode_postinit() failure
       Last message repeated 4 times
    [mpegts @ 0x9be3aa0] max_analyze_duration reached
    [mpegts @ 0x9be3aa0] PES packet size mismatch
    Input #0, mpegts, from '20130129-202049':
     Duration: 00:03:39.99, start: 9204.168844, bitrate: 1696 kb/s
       Stream #0.0[0x61]: Video: mpeg2video (Main), yuv420p, 528x480 [PAR 40:33 DAR 4:3], 15000 kb/s, 30.57 fps, 29.97 tbr, 90k tbn, 59.94 tbc
       Stream #0.1[0x64]: Audio: ac3, 48000 Hz, stereo, s16, 192 kb/s
    At least one output file must be specified

    This tells us that the video stream has PID 0x61 (97).

    I have been trying for a few days, so the following are only examples of a couple of attempts. I'll provide the playbin2 example first, since I know that thousands of people will respond, insisting that I just use that. It doesn't work.

    gst-launch-0.10 playbin2 uri=file:///home/dustin/recordings/20130129-202049

    Setting pipeline to PAUSED ...
    Pipeline is PREROLLING ...
    ERROR: from element /GstPlayBin2:playbin20/GstURIDecodeBin:uridecodebin0/GstDecodeBin2:decodebin20/GstMpegTSDemux:mpegtsdemux0: Could not determine type of stream.
    Additional debug info:
    gstmpegtsdemux.c(2931): gst_mpegts_demux_sink_event (): /GstPlayBin2:playbin20/GstURIDecodeBin:uridecodebin0/GstDecodeBin2:decodebin20/GstMpegTSDemux:mpegtsdemux0:
    No valid streams found at EOS
    ERROR: pipeline doesn't want to preroll.
    Setting pipeline to NULL ...
    Freeing pipeline ...

    It fails [probably] because no PID has been specified with which to find the video (the "EOS" error, I think).

    Naturally, I tried the following to start by demuxing the TS format. I believe "es-pids" is the property that receives the PID in the absence of PMT information (of which, tstools said above, there is none), but I tried "program-number" too, just in case. gst-inspect indicates that one is hex and the other is decimal:

    gst-launch-0.10 -v filesrc location=20130129-202049 ! mpegtsdemux es-pids=0x61 ! fakesink
    gst-launch-0.10 -v filesrc location=20130129-202049 ! mpegtsdemux program-number=97 ! fakesink

    Output:

    gst-launch-0.10 filesrc location=20130129-202049 ! mpegtsdemux es-pids=0x61 ! fakesink
    Setting pipeline to PAUSED ...
    Pipeline is PREROLLING ...
    ERROR: from element /GstPipeline:pipeline0/GstMpegTSDemux:mpegtsdemux0: Could not determine type of stream.
    Additional debug info:
    gstmpegtsdemux.c(2931): gst_mpegts_demux_sink_event (): /GstPipeline:pipeline0/GstMpegTSDemux:mpegtsdemux0:
    No valid streams found at EOS
    ERROR: pipeline doesn't want to preroll.
    Setting pipeline to NULL ...
    Freeing pipeline ...
    dustin@dustinmicro:~/recordings$ gst-launch-0.10 -v filesrc location=20130129-202049 ! mpegtsdemux es-pids=0x61 ! fakesink
    Setting pipeline to PAUSED ...
    Pipeline is PREROLLING ...
    ERROR: from element /GstPipeline:pipeline0/GstMpegTSDemux:mpegtsdemux0: Could not determine type of stream.
    Additional debug info:
    gstmpegtsdemux.c(2931): gst_mpegts_demux_sink_event (): /GstPipeline:pipeline0/GstMpegTSDemux:mpegtsdemux0:
    No valid streams found at EOS
    ERROR: pipeline doesn't want to preroll.
    Setting pipeline to NULL ...
    Freeing pipeline ...

    However, when I try mpegpsdemux (for program streams (PS), as opposed to transport streams (TS)), I get further:

    gst-launch-0.10 filesrc location=20130129-202049 ! mpegpsdemux ! fakesink

    Setting pipeline to PAUSED ...
    Pipeline is PREROLLING ...

     (gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_event_new_new_segment_full: assertion `position != -1' failed

     (gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_mini_object_ref: assertion `mini_object != NULL' failed

     (gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_pad_push_event: assertion `event != NULL' failed
     Pipeline is PREROLLED ...
     Setting pipeline to PLAYING ...
     New clock: GstSystemClock

     (gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_event_new_new_segment_full: assertion `position != -1' failed

     (gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_mini_object_ref: assertion `mini_object != NULL' failed

     (gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_pad_push_event: assertion `event != NULL' failed

     (gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_event_new_new_segment_full: assertion `position != -1' failed

     (gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_mini_object_ref: assertion `mini_object != NULL' failed

     (gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_pad_push_event: assertion `event != NULL' failed

     (gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_event_new_new_segment_full: assertion `position != -1' failed

     (gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_mini_object_ref: assertion `mini_object != NULL' failed

     (gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_pad_push_event: assertion `event != NULL' failed


    ...


    Got EOS from element "pipeline0".
    Execution ended after 1654760008 ns.
    Setting pipeline to PAUSED ...
    Setting pipeline to READY ...
    Setting pipeline to NULL ...
    Freeing pipeline ...

    I still get the same problem whenever I use mpegtsdemux, even if it follows mpegpsdemux as above.

    I don't understand this, since I haven't even picked a PID yet.

    What am I doing wrong?

    Dustin