Advanced search

Media (1)

Keyword: - Tags -/framasoft

Other articles (53)

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to MP4, OGV and WebM (supported by HTML5) and to MP4 (supported by Flash).
    Audio files are encoded to MP3 and Ogg (supported by HTML5) and to MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (10744)

  • Firefox doesn't play mp4 file in HTML5 video

    21 January 2016, by Developer

    I recorded a video with my mobile phone in ".mp4" format. Now if I load that video in an HTML5 video tag, I get the error HTTP "Content-Type" of "video/3gpp" is not supported. Why does Firefox treat the file as 3GPP even though it is an MP4 file?
    If I log the file properties when the file is loaded into the browser, I see the following:

    { name: "test.mp4", lastModified: 1434536249000, lastModifiedDate: Date 2015-06-17T10:17:29.000Z, size: 41151959, type: "video/mp4" }

    This means Firefox itself identifies the type as video/mp4, yet it still refuses to play the file, giving the error HTTP "Content-Type" of "video/3gpp" is not supported.

  • Screenrecorder application output video resolution issues [closed]

    23 June 2022, by JessieK

    I am using the GitHub code for ScreenRecorder on Linux.
Everything works fine except for the resolution of the output video.
I tried playing with the settings; the quality improved significantly, but I still have no way to change the resolution.
I need the output video to be the same size as the input video.

    


    using namespace std;

    /* initialize the resources*/
    ScreenRecorder::ScreenRecorder()
    {
    
        av_register_all();
        avcodec_register_all();
        avdevice_register_all();
        cout<<"\nall required functions are registered successfully";
    }
    
    /* uninitialize the resources */
    ScreenRecorder::~ScreenRecorder()
    {
    
        avformat_close_input(&pAVFormatContext);
        if( !pAVFormatContext )
        {
            cout<<"\nfile closed sucessfully";
        }
        else
        {
            cout<<"\nunable to close the file";
            exit(1);
        }
    
        avformat_free_context(pAVFormatContext);
        if( !pAVFormatContext )
        {
            cout<<"\navformat free successfully";
        }
        else
        {
            cout<<"\nunable to free avformat context";
            exit(1);
        }
    
    }
    
    /* function to capture and store data in frames by allocating required memory and auto deallocating the memory.   */
    int ScreenRecorder::CaptureVideoFrames()
    {
        int flag;
        int frameFinished; // When you decode a single packet you may not yet have enough information for a frame
                           // (depending on the codec, sometimes you do). Once you have decoded the group of packets
                           // that represents a frame, you have a picture; frameFinished tells you when enough has
                           // been decoded to have a frame.
    
        int frame_index = 0;
        value = 0;
    
        pAVPacket = (AVPacket *)av_malloc(sizeof(AVPacket));
        av_init_packet(pAVPacket);
    
        pAVFrame = av_frame_alloc();
        if( !pAVFrame )
        {
         cout<<"\nunable to release the avframe resources";
         exit(1);
        }
    
        outFrame = av_frame_alloc();//Allocate an AVFrame and set its fields to default values.
        if( !outFrame )
        {
         cout<<"\nunable to release the avframe resources for outframe";
         exit(1);
        }
    
        int video_outbuf_size;
        int nbytes = av_image_get_buffer_size(outAVCodecContext->pix_fmt,outAVCodecContext->width,outAVCodecContext->height,32);
        uint8_t *video_outbuf = (uint8_t*)av_malloc(nbytes);
        if( video_outbuf == NULL )
        {
            cout<<"\nunable to allocate memory";
            exit(1);
        }
    
        // Setup the data pointers and linesizes based on the specified image parameters and the provided array.
        value = av_image_fill_arrays( outFrame->data, outFrame->linesize, video_outbuf , AV_PIX_FMT_YUV420P, outAVCodecContext->width,outAVCodecContext->height,1 ); // returns : the size in bytes required for src
        if(value < 0)
        {
            cout<<"\nerror in filling image array";
        }
    
        SwsContext* swsCtx_ ;
    
        // Allocate and return swsContext.
        // a pointer to an allocated context, or NULL in case of error
        // Deprecated : Use sws_getCachedContext() instead.
        swsCtx_ = sws_getContext(pAVCodecContext->width,
                            pAVCodecContext->height,
                            pAVCodecContext->pix_fmt,
                            outAVCodecContext->width,
                            outAVCodecContext->height,
                            outAVCodecContext->pix_fmt,
                            SWS_BICUBIC, NULL, NULL, NULL);
    
    
        int ii = 0;
        int no_frames = 100;
        cout<<"\nenter No. of frames to capture : ";
        cin>>no_frames;
    
        AVPacket outPacket;
        int j = 0;
    
        int got_picture;
    
        while( av_read_frame( pAVFormatContext , pAVPacket ) >= 0 )
        {
        if( ii++ == no_frames )break;
            if(pAVPacket->stream_index == VideoStreamIndx)
            {
                value = avcodec_decode_video2( pAVCodecContext , pAVFrame , &frameFinished , pAVPacket );
                if( value < 0)
                {
                    cout<<"unable to decode video";
                }
    
                if(frameFinished)// Frame successfully decoded :)
                {
                    sws_scale(swsCtx_, pAVFrame->data, pAVFrame->linesize,0, pAVCodecContext->height, outFrame->data,outFrame->linesize);
                    av_init_packet(&outPacket);
                    outPacket.data = NULL;    // packet data will be allocated by the encoder
                    outPacket.size = 0;
    
                    avcodec_encode_video2(outAVCodecContext , &outPacket ,outFrame , &got_picture);
    
                    if(got_picture)
                    {
                        if(outPacket.pts != AV_NOPTS_VALUE)
                            outPacket.pts = av_rescale_q(outPacket.pts, video_st->codec->time_base, video_st->time_base);
                        if(outPacket.dts != AV_NOPTS_VALUE)
                            outPacket.dts = av_rescale_q(outPacket.dts, video_st->codec->time_base, video_st->time_base);
                    
                        printf("Write frame %3d (size= %2d)\n", j++, outPacket.size/1000);
                        if(av_write_frame(outAVFormatContext , &outPacket) != 0)
                        {
                            cout<<"\nerror in writing video frame";
                        }
    
                    av_packet_unref(&outPacket);
                    } // got_picture
    
                av_packet_unref(&outPacket);
                } // frameFinished
    
            }
        }// End of while-loop


    


    The first of the two parts is above. Actually, the original app seems to record video of the same size as my application does, but that still has not been of any use.

    



    


    The second part of the code:

    


    av_free(video_outbuf);

}

/* establishing the connection between camera or screen through its respective folder */
int ScreenRecorder::openCamera()
{

    value = 0;
    options = NULL;
    pAVFormatContext = NULL;

    pAVFormatContext = avformat_alloc_context();//Allocate an AVFormatContext.
/*

X11 video input device.
To enable this input device during configuration you need libxcb installed on your system. It will be automatically detected during configuration.
This device allows one to capture a region of an X11 display. 
refer : https://www.ffmpeg.org/ffmpeg-devices.html#x11grab
*/
    /* the code below is for screen recording. to connect to a camera, use v4l2 as the input parameter for av_find_input_format */
    pAVInputFormat = av_find_input_format("x11grab");
    value = avformat_open_input(&pAVFormatContext, ":0.0+10,250", pAVInputFormat, NULL);
    if(value != 0)
    {
       cout<<"\nerror in opening input device";
       exit(1);
    }

    /* set frame per second */
    value = av_dict_set( &options,"framerate","30",0 );
    if(value < 0)
    {
      cout<<"\nerror in setting dictionary value";
       exit(1);
    }

    value = av_dict_set( &options, "preset", "medium", 0 );
    if(value < 0)
    {
      cout<<"\nerror in setting preset values";
      exit(1);
    }

//  value = avformat_find_stream_info(pAVFormatContext,NULL);
    if(value < 0)
    {
      cout<<"\nunable to find the stream information";
      exit(1);
    }

    VideoStreamIndx = -1;

    /* find the first video stream index. There is also an API available to do the operations below. */
    for(int i = 0; i < pAVFormatContext->nb_streams; i++ ) // find video stream position/index.
    {
      if( pAVFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO )
      {
         VideoStreamIndx = i;
         break;
      }

    } 

    if( VideoStreamIndx == -1)
    {
      cout<<"\nunable to find the video stream index. (-1)";
      exit(1);
    }

    // get the codec context of the video stream at VideoStreamIndx
    pAVCodecContext = pAVFormatContext->streams[VideoStreamIndx]->codec;

    pAVCodec = avcodec_find_decoder(pAVCodecContext->codec_id);
    if( pAVCodec == NULL )
    {
      cout<<"\nunable to find the decoder";
      exit(1);
    }

    value = avcodec_open2(pAVCodecContext , pAVCodec , NULL);//Initialize the AVCodecContext to use the given AVCodec.
    if( value < 0 )
    {
      cout<<"\nunable to open the av codec";
      exit(1);
    }
}

/* initialize the video output file and its properties  */
int ScreenRecorder::init_outputfile()
{
    outAVFormatContext = NULL;
    value = 0;
    output_file = "../media/output.mp4";

    avformat_alloc_output_context2(&outAVFormatContext, NULL, NULL, output_file);
    if (!outAVFormatContext)
    {
        cout<<"\nerror in allocating av format output context";
        exit(1);
    }

/* Returns the output format in the list of registered output formats which best matches the provided parameters, or returns NULL if there is no match. */
    output_format = av_guess_format(NULL, output_file ,NULL);
    if( !output_format )
    {
     cout<<"\nerror in guessing the video format. try with correct format";
     exit(1);
    }

    video_st = avformat_new_stream(outAVFormatContext ,NULL);
    if( !video_st )
    {
        cout<<"\nerror in creating a av format new stream";
        exit(1);
    }

    outAVCodecContext = avcodec_alloc_context3(outAVCodec);
    if( !outAVCodecContext )
    {
        cout<<"\nerror in allocating the codec contexts";
        exit(1);
    }

    /* set property of the video file */
    outAVCodecContext = video_st->codec;
    outAVCodecContext->codec_id = AV_CODEC_ID_MPEG4;// AV_CODEC_ID_MPEG4; // AV_CODEC_ID_H264 // AV_CODEC_ID_MPEG1VIDEO
    outAVCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
    outAVCodecContext->pix_fmt  = AV_PIX_FMT_YUV420P;
    outAVCodecContext->bit_rate = 2500000; // 2500000
    outAVCodecContext->width = 1920;
    outAVCodecContext->height = 1080;
    outAVCodecContext->gop_size = 3;
    outAVCodecContext->max_b_frames = 2;
    outAVCodecContext->time_base.num = 1;
    outAVCodecContext->time_base.den = 30; // 30 fps

    {
     av_opt_set(outAVCodecContext->priv_data, "preset", "slow", 0);
    }

    outAVCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
    if( !outAVCodec )
    {
     cout<<"\nerror in finding the av codecs. try again with correct codec";
    exit(1);
    }

    /* Some container formats (like MP4) require global headers to be present
       Mark the encoder so that it behaves accordingly. */

    if ( outAVFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
    {
        outAVCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }

    value = avcodec_open2(outAVCodecContext, outAVCodec, NULL);
    if( value < 0)
    {
        cout<<"\nerror in opening the avcodec";
        exit(1);
    }

    /* create empty video file */
    if ( !(outAVFormatContext->flags & AVFMT_NOFILE) )
    {
     if( avio_open2(&outAVFormatContext->pb , output_file , AVIO_FLAG_WRITE ,NULL, NULL) < 0 )
     {
      cout<<"\nerror in creating the video file";
      exit(1);
     }
    }

    if(!outAVFormatContext->nb_streams)
    {
        cout<<"\noutput file dose not contain any stream";
        exit(1);
    }

    /* important: MP4 and some other advanced container formats require header information */
    value = avformat_write_header(outAVFormatContext , &options);
    if(value < 0)
    {
        cout<<"\nerror in writing the header context";
        exit(1);
    }


    cout<<"\n\nOutput file information :\n\n";
    av_dump_format(outAVFormatContext , 0 ,output_file ,1);


    


    GitHub link: https://github.com/abdullahfarwees/screen-recorder-ffmpeg-cpp
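
    Not part of the original question: below is a minimal sketch of one way to tie the output size to the captured input, reusing the member names from the code above (pAVFormatContext, pAVCodecContext, outAVCodecContext, options). In the listing above, the options dictionary is built but never passed to avformat_open_input(), and init_outputfile() hard-codes 1920x1080 for the encoder. The helper names open_x11_input and match_output_size and the "1920x1080" capture size are assumptions made up for illustration, not code from the linked repository.

    extern "C" {
    #include <libavdevice/avdevice.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #include <libavutil/dict.h>
    }

    /* In openCamera(): request an explicit capture size from x11grab and actually
       pass the options dictionary to avformat_open_input(). */
    static int open_x11_input(AVFormatContext **fmtCtx, AVDictionary **options)
    {
        av_dict_set(options, "framerate", "30", 0);
        av_dict_set(options, "video_size", "1920x1080", 0); // capture region size, assumed value
        AVInputFormat *inFmt = av_find_input_format("x11grab"); // const AVInputFormat* on newer FFmpeg
        return avformat_open_input(fmtCtx, ":0.0+10,250", inFmt, options);
    }

    /* In init_outputfile(): copy the decoder's dimensions into the encoder instead of
       hard-coding 1920x1080, then build the scaler with the same destination size. */
    static SwsContext *match_output_size(AVCodecContext *in, AVCodecContext *out)
    {
        out->width  = in->width;   // note: most encoders expect even dimensions
        out->height = in->height;
        return sws_getContext(in->width,  in->height,  in->pix_fmt,
                              out->width, out->height, out->pix_fmt,
                              SWS_BICUBIC, NULL, NULL, NULL);
    }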

    


  • Camera app fails on android ffmpeg application

    22 March 2021, by connor449

    I am trying to run a simple video recorder app on Android. The code is below:

    


    package com.example.camera

    //import android.R
    import android.content.DialogInterface
    import android.content.pm.PackageManager
    import android.os.Build
    import android.os.Bundle
    import android.widget.Toast
    import androidx.appcompat.app.AlertDialog
    import androidx.appcompat.app.AppCompatActivity
    import androidx.core.app.ActivityCompat
    import androidx.core.content.ContextCompat
    import com.arthenica.mobileffmpeg.FFmpeg


    const val EXTRA_MESSAGE = "com.example.myfirstapp.MESSAGE"

    class MainActivity : AppCompatActivity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContentView(R.layout.activity_main)
            if (checkPermission()) {
                //main logic or main code
                FFmpeg.execute("-f android_camera -i 0:0 -r 30 -pixel_format bgr0 -t 00:00:05 /sdcard/test.mp4")

                // . write your main code to execute, It will execute if the permission is already given.
            } else {
                requestPermission()
            }
        }

        private fun checkPermission(): Boolean {
            return if (ContextCompat.checkSelfPermission(this, android.Manifest.permission.CAMERA)
                != PackageManager.PERMISSION_GRANTED
            ) {
                // Permission is not granted
                false
            } else true
        }

        private fun requestPermission() {
            ActivityCompat.requestPermissions(
                this, arrayOf(android.Manifest.permission.CAMERA),
                PERMISSION_REQUEST_CODE
            )
        }

        override fun onRequestPermissionsResult(
            requestCode: Int,
            permissions: Array<String>,
            grantResults: IntArray
        ) {
            when (requestCode) {
                PERMISSION_REQUEST_CODE -> if (grantResults.size > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED
                ) {
                    Toast.makeText(applicationContext, "Permission Granted", Toast.LENGTH_SHORT)
                        .show()

                    // main logic
                } else {
                    Toast.makeText(applicationContext, "Permission Denied", Toast.LENGTH_SHORT)
                        .show()
                    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
                        if (ContextCompat.checkSelfPermission(this, android.Manifest.permission.CAMERA)
                            != PackageManager.PERMISSION_GRANTED
                        ) {
                            showMessageOKCancel("You need to allow access permissions",
                                DialogInterface.OnClickListener { dialog, which ->
                                    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
                                        requestPermission()
                                    }
                                })
                        }
                    }
                }
            }
        }

        private fun showMessageOKCancel(
            message: String,
            okListener: DialogInterface.OnClickListener
        ) {
            AlertDialog.Builder(this@MainActivity)
                .setMessage(message)
                .setPositiveButton("OK", okListener)
                .setNegativeButton("Cancel", null)
                .create()
                .show()
        }

        companion object {
            private const val PERMISSION_REQUEST_CODE = 200
        }
    }


    The main command that calls the video recorder is here:


               FFmpeg.execute("-f android_camera -i 0:0 -r 30 -pixel_format bgr0 -t 00:00:05 /sdcard/test.mp4")


    The app opens on my Android 10 Motorola G Power. I tap "Allow" to grant the permission, then the app crashes and I keep getting this error:


    2021-03-22 13:42:51.534 31138-31138/com.example.camera E/AndroidRuntime: FATAL EXCEPTION: main
        Process: com.example.camera, PID: 31138
        java.lang.IllegalStateException: Could not find method sendMessage(View) in a parent or ancestor Context for android:onClick attribute defined on view class com.google.android.material.button.MaterialButton with id 'button2'
            at androidx.appcompat.app.AppCompatViewInflater$DeclaredOnClickListener.resolveMethod(AppCompatViewInflater.java:436)
            at androidx.appcompat.app.AppCompatViewInflater$DeclaredOnClickListener.onClick(AppCompatViewInflater.java:393)
            at android.view.View.performClick(View.java:7161)
            at com.google.android.material.button.MaterialButton.performClick(MaterialButton.java:967)
            at android.view.View.performClickInternal(View.java:7133)
            at android.view.View.access$3500(View.java:804)
            at android.view.View$PerformClick.run(View.java:27416)
            at android.os.Handler.handleCallback(Handler.java:883)
            at android.os.Handler.dispatchMessage(Handler.java:100)
            at android.os.Looper.loop(Looper.java:241)
            at android.app.ActivityThread.main(ActivityThread.java:7617)
            at java.lang.reflect.Method.invoke(Native Method)
            at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:492)
            at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:941)
    2021-03-22 13:42:51.546 31138-31138/com.example.camera I/Process: Sending signal. PID: 31138 SIG: 9


    What am I doing wrong? Please advise.


    Edit:

    Layout XML:


    <?xml version="1.0" encoding="utf-8"?>
