Advanced search

Media (91)

Other articles (75)

  • Management of creation and editing rights for objects

    8 February 2011, by

    By default, many features are restricted to administrators, but each one can be configured independently to change the minimum status required to use it, in particular: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)

On other sites (10323)

  • Error encoding audio throughout the video in FFmpegFrameRecorder

    28 March 2016, by Ragghwendra Suryawanshi

    Hello Stack World,

    I am trying to create a video from images on Android using FFmpeg and
    JavaCV. I am able to create a video from images without audio, but when I
    try the same thing with audio, the video is created yet the audio only
    lasts for the first second of the video.

        // Recorder: 640x480 video plus the audio channel count reported by the grabber.
        FFmpegFrameRecorder myFFmpegFrameRecorder = new FFmpegFrameRecorder(new StringBuilder(String.valueOf(strPath)).append("/").append(this.FileName).toString(), 640, 480, frameGrabber.getAudioChannels());
        myFFmpegFrameRecorder.setVideoCodec(13);            // 13 == AV_CODEC_ID_MPEG4 in the bundled FFmpeg
        myFFmpegFrameRecorder.setFormat("mp4");
        myFFmpegFrameRecorder.setPixelFormat(0);            // 0 == AV_PIX_FMT_YUV420P
        myFFmpegFrameRecorder.setSampleFormat(frameGrabber.getSampleFormat());
        myFFmpegFrameRecorder.setSampleRate(44100);
        myFFmpegFrameRecorder.setFrameRate(1.0d);           // one image per second
        // Decompiler artifact: TYPE_TOUCH_INTERACTION_START equals 1048576, i.e. roughly a 1 Mbit/s video bitrate.
        myFFmpegFrameRecorder.setVideoBitrate(AccessibilityEventCompat.TYPE_TOUCH_INTERACTION_START);
        myFFmpegFrameRecorder.setAudioCodec(avcodec.AV_CODEC_ID_MP3);
        boolean isAudioFinish = false;
        try {
            frameGrabber.start();
            IplImage iplimage = new IplImage();
            myFFmpegFrameRecorder.start();
            for (int i = 0; i <= imgname - 1; i++) {
                // A fixed 7 audio frames are grabbed and recorded per image.
                for (int j = 0; j <= 6; j++) {
                    Frame frame = frameGrabber.grabFrame();
                    if (frame != null) {
                        myFFmpegFrameRecorder.record(frame);
                    }
                    // l: presumably a start time in milliseconds captured before the loop (not shown).
                    long l1 = 1000L * (System.currentTimeMillis() - l);
                    if (l1 < myFFmpegFrameRecorder.getTimestamp()) {
                        l1 = 1000L + myFFmpegFrameRecorder.getTimestamp();
                    }
                    myFFmpegFrameRecorder.setTimestamp(l1);
                }

                // Record the next still image as a video frame.
                iplimage = opencv_highgui.cvLoadImage(myObjects.get(i).toString());
                myFFmpegFrameRecorder.record(iplimage);
                opencv_core.cvReleaseImage(iplimage);
            }
            myFFmpegFrameRecorder.stop();
            frameGrabber.stop();
        } catch (Exception e) {
            e.printStackTrace();
        }

    Please help me solve this. I am missing something here that keeps it from working. I have read the FFmpegFrameRecorder documentation but have been unable to find my error.
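
    For comparison, here is a minimal sketch (not the asker's code) of one common way to keep the audio running for the whole slideshow with JavaCV 1.x: record each still image at its one-second timestamp, then keep copying audio frames from the grabber until the audio has caught up with the video position. The class name, paths, dimensions and image list are illustrative, and older JavaCV versions accept an IplImage directly in record() instead of going through OpenCVFrameConverter.

        import org.bytedeco.javacpp.avcodec;
        import org.bytedeco.javacpp.opencv_core.IplImage;
        import org.bytedeco.javacv.FFmpegFrameGrabber;
        import org.bytedeco.javacv.FFmpegFrameRecorder;
        import org.bytedeco.javacv.Frame;
        import org.bytedeco.javacv.OpenCVFrameConverter;

        import java.util.List;

        import static org.bytedeco.javacpp.opencv_imgcodecs.cvLoadImage;

        public class SlideshowWithAudio {

            public static void make(List<String> imagePaths, String audioPath, String outPath) throws Exception {
                FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(audioPath);
                grabber.start();

                FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(outPath, 640, 480, grabber.getAudioChannels());
                recorder.setFormat("mp4");
                recorder.setVideoCodec(avcodec.AV_CODEC_ID_MPEG4);
                recorder.setFrameRate(1);                              // one still image per second
                recorder.setSampleRate(grabber.getSampleRate());
                recorder.setAudioCodec(avcodec.AV_CODEC_ID_AAC);
                recorder.start();

                OpenCVFrameConverter.ToIplImage converter = new OpenCVFrameConverter.ToIplImage();

                for (int i = 0; i < imagePaths.size(); i++) {
                    // Video: one image per second, stamped explicitly (microseconds).
                    IplImage image = cvLoadImage(imagePaths.get(i));
                    recorder.setTimestamp(i * 1000000L);
                    recorder.record(converter.convert(image));

                    // Audio: copy audio frames until the source has delivered at least
                    // (i + 1) seconds' worth, so the track covers the whole slideshow
                    // instead of stopping after the first batch of frames.
                    while (grabber.getTimestamp() < (i + 1) * 1000000L) {
                        Frame audio = grabber.grabFrame();
                        if (audio == null) {
                            break;                                     // audio shorter than the video
                        }
                        if (audio.samples != null) {
                            recorder.record(audio);
                        }
                    }
                }

                recorder.stop();
                grabber.stop();
            }
        }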

  • Poster adjustment after getting it from a video

    30 June 2016, by Jass

    I am capturing a poster from this video https://www.youtube.com/watch?v=wg-kEWsL6Xc using the following code. The video could be any video; I am just giving an example.

    shell_exec('ffmpeg -y  -itsoffset -4  -i "'.base_path().'/assets/videos/'.$file_name.'.mp4" -vcodec mjpeg -vframes 1 -an -f rawvideo -filter:v scale=1170:ih*1170/iw "'.base_path().'/assets/videos/thumbnail/'.$file_name.'.jpg"')

    Width = 1170px,

    Height = 300px (as it is a background image, the exact height does not matter much)

    But I am facing an adjustment problem. The following are my current views.

    First: the head is cut off. Here I am using the CSS property background: url('completepath') center no-repeat;

    Second: a black portion shows above the head. This is the original image that I am getting from the ffmpeg command.

    My question: I need the image to come out like it does on YouTube, where you can see that in all videos the top portion of the person is visible without being cut off, and even the black portions are well adjusted.

    Currently I am downloading and testing with YouTube videos, but in production job seekers will record their own videos with an Android phone, iPhone or any camera and upload them to our site.

    So how can I overcome this issue, i.e. show the image with the person's face and body properly framed?

    Thanks in advance.
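
    As a hedged illustration (not necessarily what the asker ends up needing), one way to get a fixed 1170x300 poster straight out of ffmpeg is to scale to the target width and then crop to the target height, assuming the scaled height is at least 300. crop defaults to the centre of the frame; crop=1170:300:0:0 would keep the top of the frame instead. The input/output names and the 4-second seek are placeholders:

        ffmpeg -y -ss 4 -i input.mp4 -frames:v 1 -vf "scale=1170:-2,crop=1170:300" poster.jpg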

  • Parsing avconv/ffmpeg rawvideo output?

    23 April 2013, by DigitalMan

    I'm about to begin a project that will involve working with the output of avconv/ffmpeg, pixel-by-pixel, in rgb32 format. I intend to work with a raw byte stream, such as from the pipe protocol. Basic pointer arithmetic (C/C++) will be used to iterate over these pixels, and modify them in arbitrary manners in real-time.

    I've created a very small file using the rawvideo format and codec, and opened it up in a hex editor. As expected, it's just a series of pixels, read left to right, top to bottom. No distinguishing between lines - no problem, if you know how wide the video is beforehand. No distinguishing between frames - no problem, if you also know how tall the video is. No file header for frame rate, or even what the encoding (rgb32, rgb24, yuv, etc.) is - again, as long as you already know, it can be worked with.

    The problem occurs when, for one reason or another, some bytes are missing. Maybe the stream isn't being examined from the beginning, which is likely to be the case in my project, or maybe something just got lost. All the pre-existing knowledge in the world (short of a byte count of exactly what was missed, which isn't going to happen) won't prevent it from happily chugging along with an incorrect line and frame offset.

    So, what I'm looking for is an option for rawvideo, or possibly some other format/codec, that will allow me to work with the resulting stream at the pixel level, in RGB, yet still have a clear definition of where a new frame begins, even if it happens to start "looking" in the middle of a frame. (Width, height, and framerate will indeed be known.)
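
    For reference, a minimal sketch of the arithmetic once the stream is already aligned to a frame boundary and the dimensions are known (the class name, the 640x480 size and the sample ffmpeg command in the comment are illustrative; it does not address resynchronising after lost bytes, which is the open question):

        // Example producer: ffmpeg -i input.mp4 -f rawvideo -pix_fmt rgb32 - | java RawRgb32Reader
        import java.io.DataInputStream;
        import java.io.EOFException;
        import java.io.IOException;

        public class RawRgb32Reader {

            static final int WIDTH = 640;
            static final int HEIGHT = 480;
            static final int BYTES_PER_PIXEL = 4;                    // rgb32: 4 bytes per pixel
            static final int FRAME_SIZE = WIDTH * HEIGHT * BYTES_PER_PIXEL;

            public static void main(String[] args) throws IOException {
                DataInputStream in = new DataInputStream(System.in);
                byte[] frame = new byte[FRAME_SIZE];
                long frames = 0;

                while (true) {
                    try {
                        in.readFully(frame);                         // exactly one frame per iteration
                    } catch (EOFException eof) {
                        break;                                       // a partial trailing frame is dropped
                    }

                    // Pixel (x, y) starts at byte offset (y * WIDTH + x) * BYTES_PER_PIXEL.
                    // Which of the four bytes is R, G, B or alpha depends on the exact
                    // pixel format negotiated with ffmpeg (rgb32 is endian-dependent).
                    int x = WIDTH / 2, y = HEIGHT / 2;
                    int off = (y * WIDTH + x) * BYTES_PER_PIXEL;
                    frame[off] = (byte) ~frame[off];                 // arbitrary in-place edit of one byte

                    frames++;
                }
                System.err.println("Read " + frames + " complete frames");
            }
        }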