Advanced search

Media (1)

Keyword: - Tags -/école

Other articles (98)

  • Personalise by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation options: adding a logo; adding a banner; adding a background image.

  • Permissions overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier(), so that visitors can edit their own information on the authors page

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheets, presentations), web (HTML, CSS), LaTeX, Google Earth) (...)

On other sites (6491)

  • H264 Encoding - Could not play video using VLC Player

    31 March 2016, by bot1131357

    I am having trouble encoding an H264 video correctly using FFmpeg libav. I could not play the encoded video in VLC media player, and although I could play it in MPC-HC, the time shows 00:00/00:00. Clearly I’m missing something.

    The MediaInfo report from MPC-HC shows this:

    General
    Format : AVC
    Format/Info : Advanced Video Codec
    File size : 110 KiB
    Duration : 2s 400ms
    Overall bit rate : 375 Kbps
    Writing library : x264 core 148 r2665 a01e339
    Encoding settings : cabac=0 / ref=3 / deblock=1:0:0 / analyse=0x1:0x111 / me=hex / subme=7 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=1 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=0 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-2 / threads=7 / lookahead_threads=1 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=0 / weightp=0 / keyint=12 / keyint_min=1 / scenecut=40 / intra_refresh=0 / rc_lookahead=12 / rc=abr / mbtree=1 / bitrate=2000 / ratetol=1.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / ip_ratio=1.40 / aq=1:1.00

    Video
    Format : AVC
    Format/Info : Advanced Video Codec
    Format profile : Baseline@L2.1
    Format settings, CABAC : No
    Format settings, ReFrames : 3 frames
    Format settings, GOP : M=1, N=12
    Duration : 2s 400ms
    Bit rate : 2 000 Kbps
    Width : 320 pixels
    Height : 240 pixels
    Display aspect ratio : 4:3
    Frame rate mode : Variable
    Frame rate : 20.833 fps
    Color space : YUV
    Chroma subsampling : 4:2:0
    Bit depth : 8 bits
    Scan type : Progressive
    Bits/(Pixel*Frame) : 1.250
    Stream size : 586 KiB
    Writing library : x264 core 148 r2665 a01e339
    Encoding settings : cabac=0 / ref=3 / deblock=1:0:0 / analyse=0x1:0x111 / me=hex / subme=7 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=1 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=0 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-2 / threads=7 / lookahead_threads=1 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=0 / weightp=0 / keyint=12 / keyint_min=1 / scenecut=40 / intra_refresh=0 / rc_lookahead=12 / rc=abr / mbtree=1 / bitrate=2000 / ratetol=1.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / ip_ratio=1.40 / aq=1:1.00

    I noticed something odd in the above info:
    - The frame rate is 20.833 fps, instead of the specified 10 fps.
    - Duration of 2s 400ms did not seem right either, since the video played for more than 4s.

    Also, (AVFrame* picture)->pict_type is always set to AV_PICTURE_TYPE_NONE. I don’t think this is normal.

    The library that I’m using is ffmpeg-20160219-git-98a0053-win32-dev. I would really appreciate it if you could help me clear up this confusion.

    /*
    * Video encoding example
    */
    #include <algorithm>   // for std::min
    #include <cstdio>
    #include <cstring>
    #include <iostream>
    using namespace std;

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/imgutils.h>
    }

    char filename[] = "test.mp4";
    int main(int argc, char** argv)
    {
       AVCodec *codec = NULL;
       AVCodecContext *codecCtx= NULL;
       AVFormatContext *pFormatCtx = NULL;
       AVStream * pVideoStream = NULL;
       AVFrame *picture = NULL;

       int i, x, y;            // loop counters / pixel coordinates
       int ret,                // return value
           got_packet_ptr;     // set when the encoder produced a packet

       printf("Video encoding\n");

       // Register all formats and codecs
       av_register_all();

       // allocate context
       pFormatCtx = avformat_alloc_context();
       memcpy(pFormatCtx->filename, filename,
           min(strlen(filename) + 1, sizeof(pFormatCtx->filename))); // copy the '\0' too

       // guess format
       pFormatCtx->oformat = av_guess_format("h264", NULL, NULL);
       if (NULL==pFormatCtx->oformat)
       {
           cerr << "Could not guess output format" << endl;
           return -1;
       }  

       // Find the codec.
       codec = avcodec_find_encoder(pFormatCtx->oformat->video_codec);
       if (codec == NULL) {
           fprintf(stderr, "Codec not found\n");
           return -1;
       }

       // Set context
       int framerate = 10;
       codecCtx = avcodec_alloc_context3(codec);
       avcodec_get_context_defaults3(codecCtx, codec);
       codecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
       codecCtx->profile = FF_PROFILE_H264_BASELINE;
       // Resolution must be a multiple of two.
       codecCtx->width  = 320;
       codecCtx->height = 240;

       codecCtx->bit_rate = 2000000;
       codecCtx->time_base.den = framerate;
       codecCtx->time_base.num = 1;
       codecCtx->gop_size = 12; // emit one intra frame every twelve frames at most

       // Open the codec.  
       if (avcodec_open2(codecCtx, codec, NULL) < 0)
       {
           printf("Cannot open video codec\n");
           return -1;
       }

       // Add stream to pFormatCtx
       pVideoStream = avformat_new_stream(pFormatCtx, codec);
       if (!pVideoStream)
       {
           printf("Cannot add new video stream\n");
           return -1;
       }
       pVideoStream->codec = codecCtx;
       pVideoStream->time_base.den = framerate;
       pVideoStream->time_base.num = 1;

       if (avio_open2(&pFormatCtx->pb, filename, AVIO_FLAG_WRITE, NULL, NULL) < 0)
       {
           printf("Cannot open file\n");
           return -1;
       }

       // Write file header.
       avformat_write_header(pFormatCtx, NULL);

       // Create frame
       picture= av_frame_alloc();
       picture->format = codecCtx->pix_fmt;
       picture->width  = codecCtx->width;
       picture->height = codecCtx->height;

       int bufferImgSize = av_image_get_buffer_size(codecCtx->pix_fmt, codecCtx->width,
                       codecCtx->height, 1);   // computed but never used below
       av_image_alloc(picture->data, picture->linesize, codecCtx->width, codecCtx->height,
                       codecCtx->pix_fmt, 32);

       AVPacket avpkt;

       /* encode 5 seconds of video (50 frames at 10 fps) */
       for(i=0;i<50;i++)
       {
           /* prepare a dummy image */
           /* Y */
           for (y = 0; y < codecCtx->height; y++)
           {
               for (x = 0; x < codecCtx->width; x++)
               {
                   picture->data[0][y * picture->linesize[0] + x] = x + y + i * 3;
               }
           }
           /* Cb and Cr */
           for (y = 0; y < codecCtx->height / 2; y++)
           {
               for (x = 0; x < codecCtx->width / 2; x++)
               {
                   picture->data[1][y * picture->linesize[1] + x] = 128 + y + i * 2;
                   picture->data[2][y * picture->linesize[2] + x] = 64 + x + i * 5;
               }
           }

           // Get timestamp (in hardcoded 90 kHz ticks; see the note after this listing)
           picture->pts = (float) i * (1000.0/(float)(codecCtx->time_base.den)) * 90;

           // Encode frame to packet
           av_init_packet(&avpkt);
           got_packet_ptr = 0;
           int error = avcodec_encode_video2(codecCtx, &avpkt, picture, &got_packet_ptr);
           if (!error && got_packet_ptr > 0)
           {
               // Write packet with frame.
               ret = (av_interleaved_write_frame(pFormatCtx, &avpkt) == 0);        
           }  
           av_packet_unref(&avpkt);
       }

       // Flush remaining encoded data
       while(1)
       {
           av_init_packet(&avpkt);
           got_packet_ptr = 0;
           // Encode frame to packet.
           int error = avcodec_encode_video2(codecCtx, &avpkt, NULL, &got_packet_ptr);
           if (!error && got_packet_ptr > 0)
           {
               // Write packet with frame.
               ret = (av_interleaved_write_frame(pFormatCtx, &avpkt) == 0);        
           }
           else
           {
               break;
           }
           av_packet_unref(&avpkt);
       }
       av_write_trailer(pFormatCtx);

       av_packet_unref(&avpkt);
       av_frame_free(&picture);

       avcodec_close(codecCtx);
       av_free(codecCtx);

       cin.get();
    }
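
    A couple of notes on the code above. Leaving pict_type at AV_PICTURE_TYPE_NONE is normal for an input frame: the encoder chooses each picture type itself. The timestamps are more suspect: pts is computed in hardcoded 90 kHz ticks, while av_guess_format("h264") selects the raw H.264 bitstream writer (despite the .mp4 filename), which stores no container timestamps at all, so the player has to guess the timing (50 frames at the reported 20.833 fps works out to exactly the reported 2s 400ms, which looks like such a guess). Below is a minimal sketch of the conventional timestamp handling for this vintage of the API, reusing the variable names from the question; treat it as an illustration under those assumptions, not a drop-in fix:

       for (i = 0; i < 50; i++)
       {
           /* ... fill picture->data as above ... */

           picture->pts = i;   // frame index, in codecCtx->time_base units (1/10 s)

           av_init_packet(&avpkt);
           avpkt.data = NULL;  // let the encoder allocate the payload
           avpkt.size = 0;
           got_packet_ptr = 0;
           if (avcodec_encode_video2(codecCtx, &avpkt, picture, &got_packet_ptr) == 0
               && got_packet_ptr)
           {
               // avformat_write_header() may replace pVideoStream->time_base with
               // the muxer's own resolution, so rescale before writing.
               av_packet_rescale_ts(&avpkt, codecCtx->time_base, pVideoStream->time_base);
               avpkt.stream_index = pVideoStream->index;
               av_interleaved_write_frame(pFormatCtx, &avpkt);
           }
       }
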
  • squeeze image while capturing video with FFmpegFrameRecorder

    12 April 2016, by Saty

    I am trying to stream video with FFmpegFrameRecorder using javacv. Everything works well, except that the videos, which are really sequences of captured preview images, come out slightly squeezed vertically.

    I am using the code below, which half of the internet community seems to use for live streaming:

    public class MainActivity extends Activity implements OnClickListener {

    private final static String LOG_TAG = "MainActivity";

    private PowerManager.WakeLock mWakeLock;

    private String ffmpeg_link = "rtmp://username:password@xxx.xxx.xxx.xxx:1935/live/test.flv";
    //private String ffmpeg_link = "/mnt/sdcard/new_stream.flv";

    private volatile FFmpegFrameRecorder recorder;
    boolean recording = false;
    long startTime = 0;

    private int sampleAudioRateInHz = 44100;
    private int imageWidth = 320;
    private int imageHeight = 240;
    private int frameRate = 30;

    private Thread audioThread;
    volatile boolean runAudioThread = true;
    private AudioRecord audioRecord;
    private AudioRecordRunnable audioRecordRunnable;

    private CameraView cameraView;
    private IplImage yuvIplimage = null;

    private Button recordButton;
    private LinearLayout mainLayout;

    @Override
    public void onCreate(Bundle savedInstanceState) {
       super.onCreate(savedInstanceState);

       setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
       setContentView(R.layout.activity_main);

       initLayout();
       initRecorder();
    }

    @Override
    protected void onResume() {
       super.onResume();

       if (mWakeLock == null) {
           PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
           mWakeLock = pm.newWakeLock(PowerManager.SCREEN_BRIGHT_WAKE_LOCK, LOG_TAG);
           mWakeLock.acquire();
       }
    }

    @Override
    protected void onPause() {
       super.onPause();

       if (mWakeLock != null) {
           mWakeLock.release();
           mWakeLock = null;
       }
    }

    @Override
    protected void onDestroy() {
       super.onDestroy();

       recording = false;
    }


    private void initLayout() {

       mainLayout = (LinearLayout) this.findViewById(R.id.record_layout);

       recordButton = (Button) findViewById(R.id.recorder_control);
       recordButton.setText("Start");
       recordButton.setOnClickListener(this);

       cameraView = new CameraView(this);

       LinearLayout.LayoutParams layoutParam = new LinearLayout.LayoutParams(imageWidth, imageHeight);        
       mainLayout.addView(cameraView, layoutParam);
       Log.v(LOG_TAG, "added cameraView to mainLayout");
    }

    private void initRecorder() {
       Log.w(LOG_TAG,"initRecorder");

       if (yuvIplimage == null) {
           // Recreated after frame size is set in surface change method
           yuvIplimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 2);
           //yuvIplimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_32S, 2);

           Log.v(LOG_TAG, "IplImage.create");
       }

       recorder = new FFmpegFrameRecorder(ffmpeg_link, imageWidth, imageHeight, 1);
       Log.v(LOG_TAG, "FFmpegFrameRecorder: " + ffmpeg_link + " imageWidth: " + imageWidth + " imageHeight " + imageHeight);

       recorder.setFormat("flv");
       Log.v(LOG_TAG, "recorder.setFormat(\"flv\")");

       recorder.setSampleRate(sampleAudioRateInHz);
       Log.v(LOG_TAG, "recorder.setSampleRate(sampleAudioRateInHz)");

       // re-set in the surface changed method as well
       recorder.setFrameRate(frameRate);
       Log.v(LOG_TAG, "recorder.setFrameRate(frameRate)");

       // Create audio recording thread
       audioRecordRunnable = new AudioRecordRunnable();
       audioThread = new Thread(audioRecordRunnable);
    }

    // Start the capture
    public void startRecording() {
       try {
           recorder.start();
           startTime = System.currentTimeMillis();
           recording = true;
           audioThread.start();
       } catch (FFmpegFrameRecorder.Exception e) {
           e.printStackTrace();
       }
    }

    public void stopRecording() {
       // This should stop the audio thread from running
       runAudioThread = false;

       if (recorder != null && recording) {
           recording = false;
           Log.v(LOG_TAG,"Finishing recording, calling stop and release on recorder");
           try {
               recorder.stop();
               recorder.release();
           } catch (FFmpegFrameRecorder.Exception e) {
               e.printStackTrace();
           }
           recorder = null;
       }
    }

    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) {
       // Quit when back button is pushed
       if (keyCode == KeyEvent.KEYCODE_BACK) {
           if (recording) {
               stopRecording();
           }
           finish();
           return true;
       }
       return super.onKeyDown(keyCode, event);
    }

    @Override
    public void onClick(View v) {
       if (!recording) {
           startRecording();
           Log.w(LOG_TAG, "Start Button Pushed");
           recordButton.setText("Stop");
       } else {
           stopRecording();
           Log.w(LOG_TAG, "Stop Button Pushed");
           recordButton.setText("Start");
       }
    }

    //---------------------------------------------
    // audio thread, gets and encodes audio data
    //---------------------------------------------
    class AudioRecordRunnable implements Runnable {

       @Override
       public void run() {
           // Set the thread priority
           android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);

           // Audio
           int bufferSize;
           short[] audioData;
           int bufferReadResult;

           bufferSize = AudioRecord.getMinBufferSize(sampleAudioRateInHz,
                   AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
           audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleAudioRateInHz,
                   AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);

           audioData = new short[bufferSize];

           Log.d(LOG_TAG, "audioRecord.startRecording()");
           audioRecord.startRecording();

           // Audio Capture/Encoding Loop
           while (runAudioThread) {
               // Read from audioRecord
               bufferReadResult = audioRecord.read(audioData, 0, audioData.length);
               if (bufferReadResult > 0) {
                   //Log.v(LOG_TAG,"audioRecord bufferReadResult: " + bufferReadResult);

                   // Changes in this variable may not be picked up despite it being "volatile"
                   if (recording) {
                       try {
                           // Write to FFmpegFrameRecorder
                           Buffer[] buffer = {ShortBuffer.wrap(audioData, 0, bufferReadResult)};                        
                           recorder.record(buffer);
                       } catch (FFmpegFrameRecorder.Exception e) {
                           Log.v(LOG_TAG,e.getMessage());
                           e.printStackTrace();
                       }
                   }
               }
           }
           Log.v(LOG_TAG,"AudioThread Finished");

           /* Capture/Encoding finished, release recorder */
           if (audioRecord != null) {
               audioRecord.stop();
               audioRecord.release();
               audioRecord = null;
               Log.v(LOG_TAG,"audioRecord released");
           }
       }
    }

    class CameraView extends SurfaceView implements SurfaceHolder.Callback, PreviewCallback {

       private boolean previewRunning = false;

       private SurfaceHolder holder;
       private Camera camera;

       private byte[] previewBuffer;

       long videoTimestamp = 0;

       Bitmap bitmap;
       Canvas canvas;

       public CameraView(Context _context) {
           super(_context);

           holder = this.getHolder();
           holder.addCallback(this);
           holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
       }

       @Override
       public void surfaceCreated(SurfaceHolder holder) {
           camera = Camera.open();

           try {
               camera.setPreviewDisplay(holder);
               camera.setPreviewCallback(this);

               Camera.Parameters currentParams = camera.getParameters();
               Log.v(LOG_TAG,"Preview Framerate: " + currentParams.getPreviewFrameRate());
               Log.v(LOG_TAG,"Preview imageWidth: " + currentParams.getPreviewSize().width + " imageHeight: " + currentParams.getPreviewSize().height);

               // Use these values
               imageWidth = currentParams.getPreviewSize().width;
               imageHeight = currentParams.getPreviewSize().height;
               frameRate = currentParams.getPreviewFrameRate();                

               bitmap = Bitmap.createBitmap(imageWidth, imageHeight, Bitmap.Config.ALPHA_8);


               /*
               Log.v(LOG_TAG,"Creating previewBuffer size: " + imageWidth * imageHeight * ImageFormat.getBitsPerPixel(currentParams.getPreviewFormat())/8);
               previewBuffer = new byte[imageWidth * imageHeight * ImageFormat.getBitsPerPixel(currentParams.getPreviewFormat())/8];
               camera.addCallbackBuffer(previewBuffer);
               camera.setPreviewCallbackWithBuffer(this);
               */              

               camera.startPreview();
               previewRunning = true;
           }
           catch (IOException e) {
               Log.v(LOG_TAG,e.getMessage());
               e.printStackTrace();
           }  
       }

       public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
           Log.v(LOG_TAG,"Surface Changed: width " + width + " height: " + height);

           // We would do this if we want to reset the camera parameters
           /*
           if (!recording) {
               if (previewRunning){
                   camera.stopPreview();
               }
               try {
                   //Camera.Parameters cameraParameters = camera.getParameters();
                   //p.setPreviewSize(imageWidth, imageHeight);
                   //p.setPreviewFrameRate(frameRate);
                   //camera.setParameters(cameraParameters);

                   camera.setPreviewDisplay(holder);
                   camera.startPreview();
                   previewRunning = true;
               }
               catch (IOException e) {
                   Log.e(LOG_TAG,e.getMessage());
                   e.printStackTrace();
               }  
           }            
           */

           // Get the current parameters
           Camera.Parameters currentParams = camera.getParameters();
           Log.v(LOG_TAG,"Preview Framerate: " + currentParams.getPreviewFrameRate());
           Log.v(LOG_TAG,"Preview imageWidth: " + currentParams.getPreviewSize().width + " imageHeight: " + currentParams.getPreviewSize().height);

           // Use these values
           imageWidth = currentParams.getPreviewSize().width;
           imageHeight = currentParams.getPreviewSize().height;
           frameRate = currentParams.getPreviewFrameRate();

           // Create the yuvIplimage if needed
           yuvIplimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 2);
           //yuvIplimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_32S, 2);
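           // imageWidth/imageHeight now hold the camera's real preview size,
           // but the recorder built in initRecorder() still carries the old
           // dimensions; any mismatch here can distort the recorded frames.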
       }

       @Override
       public void surfaceDestroyed(SurfaceHolder holder) {
           try {
               camera.setPreviewCallback(null);

               previewRunning = false;
               camera.release();

           } catch (RuntimeException e) {
               Log.v(LOG_TAG,e.getMessage());
               e.printStackTrace();
           }
       }

       @Override
       public void onPreviewFrame(byte[] data, Camera camera) {

           if (yuvIplimage != null && recording) {
               videoTimestamp = 1000 * (System.currentTimeMillis() - startTime); // elapsed time in microseconds

               // Put the camera preview frame right into the yuvIplimage object
               yuvIplimage.getByteBuffer().put(data);

               // FAQ about IplImage:
               // - For custom raw processing of data, getByteBuffer() returns an NIO direct
               //   buffer wrapped around the memory pointed by imageData, and under Android we can
               //   also use that Buffer with Bitmap.copyPixelsFromBuffer() and copyPixelsToBuffer().
               // - To get a BufferedImage from an IplImage, we may call getBufferedImage().
               // - The createFrom() factory method can construct an IplImage from a BufferedImage.
               // - There are also a few copy*() methods for BufferedImage<->IplImage data transfers.

               // Let's try it..
               // This works but only on transparency
               // Need to find the right Bitmap and IplImage matching types

               /*
               bitmap.copyPixelsFromBuffer(yuvIplimage.getByteBuffer());
               //bitmap.setPixel(10,10,Color.MAGENTA);

               canvas = new Canvas(bitmap);
               Paint paint = new Paint();
               paint.setColor(Color.GREEN);
               float leftx = 20;
               float topy = 20;
               float rightx = 50;
               float bottomy = 100;
               RectF rectangle = new RectF(leftx,topy,rightx,bottomy);
               canvas.drawRect(rectangle, paint);

               bitmap.copyPixelsToBuffer(yuvIplimage.getByteBuffer());
               */
               //Log.v(LOG_TAG,"Writing Frame");

               try {

                   // Get the correct time
                   recorder.setTimestamp(videoTimestamp);

                   // Record the image into FFmpegFrameRecorder
                   recorder.record(yuvIplimage);

               } catch (FFmpegFrameRecorder.Exception e) {
                   Log.v(LOG_TAG,e.getMessage());
                   e.printStackTrace();
               }
           }
       }
    }

    }

  • Fix min/max validation. Closes gh-666. Fixes #648

    20 March 2013, by ekonijn

    In 1.10.0, min/max validation was supported for input type="text",
    where min/max were interpreted as numbers. This meant min/max
    for dates would not work: min="2012-02-13" was interpreted as min="Not a Number".

    In 1.11.0, min/max were no longer converted to numbers. This meant
    min/max for dates worked, but min/max for numbers failed:
    "50" < "150" < "1000" does not hold when compared as strings.

    For an example, see http://jsbin.com/awokex/3
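
    The numeric failure above is ordinary lexicographic string comparison. A quick demonstration of the pitfall (sketched here in C++, since any language that compares strings character by character behaves the same way):

    #include <iostream>
    #include <string>

    int main() {
        std::string a = "50", b = "150";
        // Character-by-character comparison: '5' > '1', so "50" > "150"
        // as strings, even though 50 < 150 numerically.
        std::cout << (a < b) << "\n";                         // prints 0 (false)
        std::cout << (std::stod(a) < std::stod(b)) << "\n";   // prints 1 (true)
        return 0;
    }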

    This commit makes the behaviour of min/max depend on the input type:

    * input type=text (or no type attribute) has numeric min/max, as in 1.10.0.
    * input type=date has working min/max for dates;
    on mobile browsers you also get a date picker,
    and the browser may reject invalid dates before
    JavaScript gets a chance to complain.
    * input type=number or range gets numeric min/max,
    plus a numeric keypad or slider on mobile browsers,
    and the browser may reject invalid input before
    JavaScript gets a chance to complain.

    Allowing the use of min/max with type=number/range/date is important
    for mobile browsers, where the numeric keypad or date picker
    makes the input much easier to use than a generic text input field.
    In this situation jquery-validate remains necessary to support
    older browsers that do not validate input based on type
    and min/max.

    For situations where numeric input should be validated by jQuery
    without giving the browser a chance to validate the input format,
    input type=text combined with min/max can be used, as in 1.10.0.