
Media (1)

Word: - Tags -/bug

Other articles (111)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critique of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
    To contribute, register to the project users’ mailing (...)

  • Adding user-specific information and other changes to author-related behaviour

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to change certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

On other sites (7993)

  • Using ffmpeg.exe makes the computer slow; in Task Manager it takes over 1 GB of memory. How can I fix it?

    29 May 2013, by Revuen Ben Dror

    In my Form1 I have this code:

    private void StartRecording_Click(object sender, EventArgs e)
    {
        ffmp.Start("test.avi", 25);
        timer1.Enabled = true;
    }

    ffmp is an instance of my class Ffmpeg.
    In this class I add frames to a pipe and create an AVI file.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using System.Drawing;
    using System.IO.Pipes;
    using System.Runtime.InteropServices;
    using System.Diagnostics;

    namespace ScreenVideoRecorder
    {
       class Ffmpeg
       {
           NamedPipeServerStream p;
           String pipename = "mytestpipe";
           byte[] b;
           System.Diagnostics.Process process;

           public Ffmpeg()
           {

           }

           public void Start(string FileName, int BitmapRate)
           {
               p = new NamedPipeServerStream(pipename, PipeDirection.Out, 1, PipeTransmissionMode.Byte);
               b = new byte[1920 * 1080 * 3]; // buffer for the R, G and B bytes of one 1920x1080 frame
               process = new System.Diagnostics.Process();
               process.StartInfo.FileName = @"D:\pipetest\pipetest\ffmpegx86\ffmpeg.exe";
               process.EnableRaisingEvents = false;
               process.StartInfo.WorkingDirectory = @"D:\pipetest\pipetest\ffmpegx86";
               process.StartInfo.Arguments = @"-f rawvideo -pix_fmt bgr0 -video_size 1920x1080 -i \\.\pipe\mytestpipe -map 0 -c:v libx264 -r " + BitmapRate + " " + FileName;
               process.Start();
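               // Note: the StartInfo values set below are assigned after Start(), so they have no effect on the ffmpeg process that is already running.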

               process.StartInfo.UseShellExecute = false;
               process.StartInfo.CreateNoWindow = false;

               p.WaitForConnection();
           }

           public void PushFrame(Bitmap bmp)
           {

               int length;
               // Lock the bitmap's bits.
               Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
               //Rectangle rect = new Rectangle(0, 0, 1280, 720);
               System.Drawing.Imaging.BitmapData bmpData =
                   bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadOnly,
                   bmp.PixelFormat);

               int absStride = Math.Abs(bmpData.Stride);
               // Get the address of the first line.
               IntPtr ptr = bmpData.Scan0;

               // Declare an array to hold the bytes of the bitmap.
               //length = 3 * bmp.Width * bmp.Height;
               length = absStride * bmpData.Height;
               byte[] rgbValues = new byte[length];
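               // The loop below copies the bitmap scanline by scanline; with a positive stride, j always equals bmp.Height - i - 1, so the rows keep their original order.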

               int j = bmp.Height - 1;
               for (int i = 0; i < bmp.Height; i++)
               {
                   IntPtr pointer = new IntPtr(bmpData.Scan0.ToInt32() + (bmpData.Stride * j));
                   System.Runtime.InteropServices.Marshal.Copy(pointer, rgbValues, absStride * (bmp.Height - i - 1), absStride);
                   j--;
               }

               p.Write(rgbValues, 0, length);

               bmp.UnlockBits(bmpData);
           }

           public void Close()
           {
               p.Close();
           }
       }
    }

    The problem is that when I run my application from Visual Studio 2012 Pro and click the button, it opens a console window and starts processing.

    I watched ffmpeg.exe in Task Manager and saw that its memory usage started at 996 MB and very quickly jumped to 1040 MB. The CPU usage was only 16%.

    Once I ended the ffmpeg.exe task, everything went back to running smoothly.
    While it was running, dragging my Form around the screen, for example, was slow and stuttery.

    After closing ffmpeg.exe I could drag the Form around smoothly and quickly, the way it should be.

    I tried to Google it and found some other people with what I think is the same problem, but I'm not sure where the problem is or how to fix it.

    I'm not sure which version of ffmpeg.exe I'm using, but I read in a few places that it doesn't work better with newer versions; maybe I'm mistaken here.

    My Windows is 8 with 6 GB of RAM.
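
    For scale, here is a small, purely illustrative Python calculation of the raw frame sizes implied by the ffmpeg arguments above (a sketch of the arithmetic only; the variable names are made up and nothing here is a measurement from this machine):

    width, height, fps = 1920, 1080, 25

    frame_ffmpeg_reads = width * height * 4    # "-pix_fmt bgr0" means 4 bytes per pixel
    buffer_b_field     = width * height * 3    # the pre-allocated (and unused) byte[] b holds 3 bytes per pixel

    print("frame size ffmpeg reads from the pipe: %.1f MB" % (frame_ffmpeg_reads / 1e6))
    print("size of the byte[] b buffer:           %.1f MB" % (buffer_b_field / 1e6))
    print("raw data pushed per second at %d fps:  %.0f MB" % (fps, fps * frame_ffmpeg_reads / 1e6))

    Each raw frame is therefore roughly 8 MB before libx264 even starts buffering frames for lookahead and reference, so a working set far larger than a single frame is expected; the exact number of bytes PushFrame writes also depends on the bitmap's stride and pixel format.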

  • How To Play Hardware Accelerated Video on A Mac

    28 May 2013, by Multimedia Mike — General

    I have a friend who was considering purchasing a Mac Mini recently. At the time of this writing, there are 3 desktop models (and 2 more “server” models).


    Apple Mac Mini

    The cheapest one is a Core i5 2.5 GHz. Then there are 2 Core i7 models : 2.3 GHz and 2.6 GHz. The difference between the latter 2 is US$100. The only appreciable technical difference is the extra 0.3 GHz and the choice came down to those 2.

    He asked me which one would be able to play HD video at full frame rate. I found this query puzzling. But then, I have been “in the biz” for a bit too long. Whether or not a computer or device can play a video well depends on a lot of factors.

    Hardware Support
    First of all, looking at the raw speed of the general-purpose CPU inside of a computer as a gauge of video playback performance is generally misguided in this day and age. In general, we have a video standard (H.264, which I’ll focus on for this post) and many bits of hardware are able to accelerate decoding. So, the question is not whether the CPU can decode the data in real time, but can any other hardware in the device (likely the graphics hardware) handle it? These machines have Intel HD 4000 graphics and, per my reading of the literature, they are capable of accelerating H.264 video decoding.

    Great, so the hardware supports accelerated decoding. So it’s a done deal, right? Not quite…

    Operating System Support
    An application can’t do anything pertaining to hardware without permission from the operating system. So the next question is: Does Mac OS X allow an application to access accelerated video decoding hardware if it’s available? This used to be a contentious matter (notably, Adobe Flash Player was unable to accelerate H.264 playback on Mac in the absence of such an API) but then Apple released an official API detailed in Technical Note TN2267.

    So, does this mean that video is magically accelerated? Nope, we’re still not there yet…

    Application Support
    It’s great that all of these underlying pieces are in place, but if an individual application chooses to decode the video directly on the CPU, it’s all for naught. An application needs to query the facilities and direct data through the API if it wants to leverage the acceleration. Obviously, at this point it becomes a matter of “which application?”

    My friend eventually opted to get the pricier of the desktop Mac Mini models and we ran some ad-hoc tests since I was curious how widespread the acceleration support is among Mac multimedia players. Here are some programs I wanted to test, playing 1080p H.264:

    • Apple QuickTime Player
    • VLC
    • YouTube with Flash Player (any browser)
    • YouTube with Safari/HTML5
    • YouTube with Chrome/HTML5
    • YouTube with Firefox/HTML5
    • Netflix

    I didn’t take exhaustive notes but my impromptu tests revealed QuickTime Player was, far and away, the most performant player, occupying only around 5% of the CPU according to the Mac OS X System Profiler graph (which is likely largely spent on audio decoding).

    VLC consistently required 20-30% CPU, so it’s probably leveraging some acceleration facilities. I think that Flash Player and the various HTML5 elements performed similarly (their multi-process architectures can make such a trivial profiling test difficult).

    The outlier was Netflix running in Firefox via Microsoft’s Silverlight plugin. Of course, the inner workings of Netflix’s technology are opaque to outsiders and we don’t even know if it uses H.264. It may very well use Microsoft’s VC-1 which is not a capability provided by the Mac OS X acceleration API (it doesn’t look like the Intel HD 4000 chip can handle it either). I have never seen any data one way or another about how Netflix encodes video. However, I was able to see that Netflix required an enormous amount of CPU muscle on the Mac platform.

    Conclusion
    The foregoing is a slight simplification of the video playback pipeline. There are some other considerations, most notably how the video is displayed afterwards. To circle back around to the original question: Can the Mac Mini handle full HD video playback? As my friend found, the meager Mac Mini can do an admirable job at playing full HD video without loading down the CPU.

  • How to convert an H264 RTP stream from PCAP to a playable video file

    21 August 2014, by yoosha

    I have captured an H264 stream in PCAP files and am trying to create media files from the data. The container is not important (avi, mp4, mkv, …).
    When I use videosnarf or rtpbreak (combined with Python code that adds 00 00 00 01 before each packet) and then ffmpeg, the result is OK only if the input frame rate is constant (or nearly constant). However, when the input is VFR, the result plays too fast (and in some rare cases too slow).
    For example:

    videosnarf -i captured.pcap -c
    ffmpeg -i H264-media-1.264 output.avi

    After investigating the issue, I now believe that since videosnarf (and rtpbreak) remove the RTP headers from the packets, the timestamps are lost and ffmpeg treats the input data as CBR.
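
    For reference, the RTP timestamps (and the H.264 payloads) can be read straight from the capture with dpkt, which the script further down already imports. The following is only a minimal sketch with an illustrative helper name: it assumes plain Ethernet/IPv4/UDP encapsulation and payload type 109 (as in the SDP below), and it handles only single NAL unit packets, leaving out FU-A/STAP-A reassembly:

    import dpkt

    START_CODE = b"\x00\x00\x00\x01"

    def extract_annexb(pcap_path, out_path, payload_type=109):
        # Write single-NAL-unit payloads as an Annex B stream and collect their RTP timestamps.
        timestamps = []                                   # 90 kHz RTP clock values
        with open(pcap_path, "rb") as f, open(out_path, "wb") as out:
            for _, buf in dpkt.pcap.Reader(f):
                eth = dpkt.ethernet.Ethernet(buf)
                if not isinstance(eth.data, dpkt.ip.IP):
                    continue
                ip = eth.data
                if not isinstance(ip.data, dpkt.udp.UDP):
                    continue
                rtp = dpkt.rtp.RTP(ip.data.data)
                if rtp.pt != payload_type or not rtp.data:
                    continue
                nal_type = ord(rtp.data[0:1]) & 0x1F
                if 1 <= nal_type <= 23:                   # single NAL unit packet
                    out.write(START_CODE + rtp.data)
                    timestamps.append(rtp.ts)
        return timestamps

    Differences between successive timestamps, divided by 90000.0, give per-frame durations; that is exactly the timing information that disappears once the RTP headers are stripped.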

    1. I would like to know if there is a way to pass the timestamp vector (in a separate file?) or any other information to ffmpeg so that the result is created correctly.
    2. Is there any other way I can take the data out of the PCAP file and play it, or convert it and then play it?
    3. Since all the work is done in Python, any suggestion of libraries/modules that can help with the work (even if it requires some coding) is welcome as well.

    Note: All the work is done offline, so there are no limitations on the output. It can be CBR/VBR, any playable container, and transcoding is fine. The only "limitation" I have: it should all run on Linux…

    Thanks
    Y

    Some additional information:
    Since nothing provides ffmpeg with the timestamp data, I decided to try a different approach: skip videosnarf and use Python code to pipe the packets directly to ffmpeg (using the "-f rtp -i -" options), but then it refuses to accept them unless I provide an SDP file...
    How do I provide the SDP file? Is it an additional input file? ("-i config.sdp")

    The following code is an unsuccessful attempt at doing the above:

    import time  
    import sys  
    import shutil  
    import subprocess  
    import os  
    import dpkt  

    if len(sys.argv) < 2:  
       print "argument required!"  
       print "txpcap <pcap file="file">"  
       sys.exit(2)  
    pcap_full_path = sys.argv[1]  

    ffmp_cmd = ['ffmpeg','-loglevel','debug','-y','-i','109c.sdp','-f','rtp','-i','-','-na','-vcodec','copy','p.mp4']  

    ffmpeg_proc = subprocess.Popen(ffmp_cmd,stdout = subprocess.PIPE,stdin = subprocess.PIPE)  

    with open(pcap_full_path, "rb") as pcap_file:  
       pcapReader = dpkt.pcap.Reader(pcap_file)  
       for ts, data in pcapReader:  
           if len(data) < 49:
               continue  
           ffmpeg_proc.stdin.write(data[42:])

    sout, err = ffmpeg_proc.communicate()  
    print "stdout ---------------------------------------"  
    print sout  
    print "stderr ---------------------------------------"  
    print err  

    In general this will pipe the packets from the PCAP file to the following command:

    ffmpeg -loglevel debug -y -i 109c.sdp -f rtp -i - -na -vcodec copy p.mp4

    SDP file: [RTP includes dynamic payload type #109, H264]

    v=0
    o=- 0 0 IN IP4 ::1
    s=No Name
    c=IN IP4 ::1
    t=0 0
    a=tool:libavformat 53.32.100
    m=video 0 RTP/AVP 109
    a=rtpmap:109 H264/90000
    a=fmtp:109 packetization-mode=1;profile-level-id=64000c;sprop-parameter-sets=Z2QADKwkpAeCP6wEQAAAAwBAAAAFI8UKkg==,aMvMsiw=;
    b=AS:200

    Results:

    ffmpeg version 0.10.2 Copyright (c) 2000-2012 the FFmpeg developers
      built on Mar 20 2012 04:34:50 with gcc 4.4.6 20110731 (Red Hat 4.4.6-3)
      configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --enable-shared --enable-runtime-cpudetect --enable-gpl --enable-version3 --enable-postproc --enable-avfilter --enable-pthreads --enable-x11grab --enable-vdpau --disable-avisynth --enable-frei0r --enable-libopencv --enable-libdc1394 --enable-libdirac --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --disable-stripping
      libavutil      51. 35.100 / 51. 35.100
      libavcodec     53. 61.100 / 53. 61.100
      libavformat    53. 32.100 / 53. 32.100
      libavdevice    53.  4.100 / 53.  4.100
      libavfilter     2. 61.100 /  2. 61.100
      libswscale      2.  1.100 /  2.  1.100
      libswresample   0.  6.100 /  0.  6.100
      libpostproc    52.  0.100 / 52.  0.100
    [sdp @ 0x15c0c00] Format sdp probed with size=2048 and score=50
    [sdp @ 0x15c0c00] video codec set to: h264
    [NULL @ 0x15c7240] RTP Packetization Mode: 1
    [NULL @ 0x15c7240] RTP Profile IDC: 64 Profile IOP: 0 Level: c
    [NULL @ 0x15c7240] Extradata set to 0x15c78e0 (size: 36)!
    [h264 @ 0x15c7240] error_recognition separate: 1; 1
    [h264 @ 0x15c7240] error_recognition combined: 1; 10001
    [sdp @ 0x15c0c00] decoding for stream 0 failed
    [sdp @ 0x15c0c00] Could not find codec parameters (Video: h264)
    [sdp @ 0x15c0c00] Estimating duration from bitrate, this may be inaccurate
    109c.sdp: could not find codec parameters

    Traceback (most recent call last):
      File "./ffpipe.py", line 26, in <module>
        ffmpeg_proc.stdin.write(data[42:])
    IOError: [Errno 32] Broken pipe

    (Forgive the mess above; the editor kept complaining about code that was not indented properly.)

    I've been working on this issue for days... Any help/suggestion/hint will be appreciated.