
Other articles (94)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-select fields. Compare the two images below.
    To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • Custom menus

    14 November 2010

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators fine-tune the configuration of those menus.
    Menus created when the site is initialized
    By default, three menus are created automatically when the site is initialized: The main menu; Identifier: barrenav; This menu is usually inserted at the top of the page, after the header block, and its identifier makes it compatible with templates based on Zpip; (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your Médiaspip installation is at version 0.2 or later. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (8812)

  • How can I save the preview video in pictureBox1 to an avi/mp4 video file on hard disk using DirectShow?

    10 April 2016, by benny dayag

    The first problem (maybe it's not really a problem): the video preview in pictureBox1 is working, but the frame rate seems wrong and I can't figure out how to set or change it. The preview seems a bit harsh on the eyes; it isn't flickering, but it isn't moving smoothly either.

    The main problem is how to save the preview in pictureBox1, or the stream directly, to a video file. The MediaSubType I'm getting is h.264.

    The video comes from the Elgato Game Capture device.

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using System.Windows.Forms;
    using DirectShowLib;
    using DirectShowLib.BDA;
    using DirectShowLib.DES;
    using DirectShowLib.DMO;
    using DirectShowLib.Dvd;
    using DirectShowLib.MultimediaStreaming;
    using DirectShowLib.SBE;
    using System.Runtime.InteropServices;
    using System.Management;
    using System.IO;
    using System.Drawing.Imaging;


    namespace Youtube_Manager
    {

       public partial class Elgato_Video_Capture : Form
       {


           IFileSinkFilter sink;

           IFilterGraph2 graph;
           ICaptureGraphBuilder2 captureGraph;
           System.Drawing.Size videoSize;

           string error = "";
           List<DsDevice> devices = new List<DsDevice>();
           IMediaControl mediaControl;

           public Elgato_Video_Capture()
           {
               InitializeComponent();



               if (comboBox1.Items.Count == 0)
               {
                   for (int xx = 1; xx <= 8; xx++)
                   {
                       comboBox1.Items.Add(xx);
                   }
               }

               InitDevice();
               timer1.Start();
           }

           IBaseFilter smartTeeFilter;
           IPin outPin;
           IPin inPin;
           private void InitDevice()
           {
               try
               {
                   //Set the video size to use for capture and recording
                   videoSize = new Size(827, 505);//1280, 720);

                   //Initialize filter graph and capture graph
                   graph = (IFilterGraph2)new FilterGraph();
                   captureGraph = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
                   captureGraph.SetFiltergraph(graph);
                   //Create filter for Elgato
                   Guid elgatoGuid = new Guid("39F50F4C-99E1-464A-B6F9-D605B4FB5918");
                   Type comType = Type.GetTypeFromCLSID(elgatoGuid);
                   IBaseFilter  elgatoFilter = (IBaseFilter)Activator.CreateInstance(comType);
                   graph.AddFilter(elgatoFilter, "Elgato Video Capture Filter");

                   //Create smart tee filter, add to graph, connect Elgato's video out to smart tee in
                   smartTeeFilter = (IBaseFilter)new SmartTee();

                   graph.AddFilter(smartTeeFilter, "Smart Tee");
                   outPin = GetPin(elgatoFilter, "Video");
                   inPin = GetPin(smartTeeFilter, "Input");
                   SetAndGetAllAvailableResolution(outPin);
                   graph.Connect(outPin, inPin);


                   //Create video renderer filter, add it to graph, connect smartTee Preview pin to video renderer's input pin
                   IBaseFilter videoRendererFilter = (IBaseFilter)new VideoRenderer();

                   graph.AddFilter(videoRendererFilter, "Video Renderer");
                   outPin = GetPin(smartTeeFilter, "Preview");

                   inPin = GetPin(videoRendererFilter, "Input");
                   graph.Connect(outPin, inPin);

                  // int hr = graph.Connect(outPin, inPin); ;
                  // DsError.ThrowExceptionForHR(hr);

                   captureGraph.SetOutputFileName(MediaSubType.Avi, @"e:\screenshots\test1.mp4", out smartTeeFilter, out sink);

                   //Render stream from video renderer
                   captureGraph.RenderStream(PinCategory.VideoPort, MediaType.Video, videoRendererFilter, null, null);
                   //Set the video preview to be the videoFeed panel
                   IVideoWindow vw = (IVideoWindow)graph;
                   vw.put_Owner(pictureBox1.Handle);
                   vw.put_MessageDrain(this.Handle);
                   vw.put_WindowStyle(WindowStyle.Child | WindowStyle.ClipSiblings | WindowStyle.ClipChildren);
                   vw.SetWindowPosition(0, 0, 827, 505);

                   //Start the preview
                   mediaControl = graph as IMediaControl;
                   mediaControl.Run();
               }
               catch (Exception err)
               {
                   error = err.ToString();
               }
           }

            IPin GetPin(IBaseFilter filter, string pinname)
           {
               IEnumPins epins;
               int hr = filter.EnumPins(out epins);
               checkHR(hr, "Can't enumerate pins");
               IntPtr fetched = Marshal.AllocCoTaskMem(4);
               IPin[] pins = new IPin[1];
               while (epins.Next(1, pins, fetched) == 0)
               {
                   PinInfo pinfo;
                   pins[0].QueryPinInfo(out pinfo);
                   bool found = (pinfo.name == pinname);
                   DsUtils.FreePinInfo(pinfo);
                   if (found)
                       return pins[0];
               }
               checkHR(-1, "Pin not found");
               return null;
           }

           public  void checkHR(int hr, string msg)
           {
               if (hr < 0)
               {
                   MessageBox.Show(msg);
                   DsError.ThrowExceptionForHR(hr);
               }



           }

           public void SetAndGetAllAvailableResolution(IPin VideoOutPin)
           {
               int hr = 0;
               IAMStreamConfig streamConfig = (IAMStreamConfig)VideoOutPin;
               AMMediaType searchmedia;
               AMMediaType CorectvidFormat = new AMMediaType();
               IntPtr ptr;
               int piCount, piSize;
               hr = streamConfig.GetNumberOfCapabilities(out piCount, out piSize);
               ptr = Marshal.AllocCoTaskMem(piSize);
               for (int i = 0; i < piCount; i++)
               {
                   hr = streamConfig.GetStreamCaps(i, out searchmedia, ptr);
                   VideoInfoHeader v = new VideoInfoHeader();

                   Marshal.PtrToStructure(searchmedia.formatPtr, v);
                   if (i == 2)// 4
                   {
                       CorectvidFormat = searchmedia;
                   }
               }
               hr = streamConfig.SetFormat(CorectvidFormat);

               IntPtr pmt = IntPtr.Zero;
               AMMediaType mediaType = new AMMediaType();
               IAMStreamConfig streamConfig1 = (IAMStreamConfig)VideoOutPin;
               hr = streamConfig1.GetFormat(out mediaType);
               BitmapInfoHeader bmpih = new BitmapInfoHeader();
               Marshal.PtrToStructure(mediaType.formatPtr, bmpih);
           }
     }
    }

    I tried the following line to save the video to a file, but the file on the hard disk is 0 KB, so I guess this is the wrong way to do it:

    captureGraph.SetOutputFileName(MediaSubType.Avi, @"e:\screenshots\test1.mp4", out smartTeeFilter, out sink);

    I also thought about saving each frame (bitmap image) from pictureBox1 to the hard disk, or maybe to memory, and using ffmpeg to build a video file in real time from the saved frames, but for some reason I can't get/save the images (frames) from pictureBox1.

    I tried using DrawToBitmap, but all the frames (bitmaps saved on disk) are empty, each only 2.24 KB.

    This is how I tried to get the frames from pictureBox1:

    public static int counter = 0;

    private void timer1_Tick(object sender, EventArgs e)
    {
        counter++;
        Bitmap bmp = new Bitmap(pictureBox1.ClientSize.Width, pictureBox1.ClientSize.Height);
        pictureBox1.DrawToBitmap(bmp, pictureBox1.ClientRectangle);
        bmp.Save(@"e:\screenshots\" + "screenshot" + counter.ToString("D6") + ".bmp");
        bmp.Dispose();
    }

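The frames-plus-ffmpeg fallback mentioned above can skip the disk round-trip entirely by piping raw frames into ffmpeg's stdin and letting it encode on the fly. A minimal Python sketch of that pattern, assuming ffmpeg is on the PATH; the dimensions, output path, and frame source are placeholders for whatever actually grabs the images:

```python
import subprocess

# Example dimensions; libx264 with yuv420p requires even width/height.
WIDTH, HEIGHT, FPS = 1280, 720, 30

def build_ffmpeg_cmd(path):
    """Command line: raw BGR frames on stdin -> H.264 in an mp4 container."""
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo",            # input is headerless raw frames
        "-pix_fmt", "bgr24",
        "-s", f"{WIDTH}x{HEIGHT}",
        "-r", str(FPS),
        "-i", "-",                   # read the frames from stdin
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",
        path,
    ]

def write_video(path, frames):
    """Feed an iterable of raw frame byte strings to ffmpeg."""
    proc = subprocess.Popen(build_ffmpeg_cmd(path), stdin=subprocess.PIPE)
    for frame in frames:
        # ffmpeg slices stdin into frames purely by size, so every
        # frame must be exactly width * height * 3 bytes.
        assert len(frame) == WIDTH * HEIGHT * 3
        proc.stdin.write(frame)
    proc.stdin.close()
    proc.wait()
```

Note that this only helps if the frames come from somewhere other than the renderer's window: video renderers often draw through overlay surfaces that GDI calls such as DrawToBitmap never see, which is a likely reason the saved bitmaps are blank.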
  • Broadcast mjpeg stream via websocket using ffmpeg and Python Tornado

    25 February 2016, by Asampaiz

    Well, I have been struggling for weeks now, searching and reading hundreds of pages, and I'm close to giving up.

    I need your help. This is the story: I want to stream my Logitech C930e webcam (connected to a Raspberry Pi 2) to a web browser. I have tried many different approaches, such as using ffserver to pass the stream from ffmpeg to the browser, but they all rest on the same basis: they all re-encode. ffserver always re-encodes the stream passed to it by ffmpeg, whether or not it is already in the right format. My webcam has built-in MJPEG encoding up to 1080p, which is exactly why I chose it; I don't want to spend all of the Pi 2's resources just encoding the stream.

    That approach ended up eating all of my Raspberry Pi 2's resources:

    Logitech C930e ---mjpeg 720p (compressed) instead of rawvideo---> ffmpeg (copy, no re-encoding) ---http---> ffserver (re-encoding to mjpeg; this is the problem) ---http---> Firefox

    My new approach:

    Logitech C930e ---mjpeg 720p (compressed) instead of rawvideo---> ffmpeg (copy, no re-encoding) ---pipe---> Python3 (using Tornado as the web framework) ---websocket---> Firefox

    The problem with the new approach

    The problem is that I cannot make sure the stream format passed by ffmpeg via the pipe to Python is ready/compatible to be streamed to the browser via websocket. I mean, I already do all the steps above, but the result shown in the browser is an unreadable image (like a TV that has lost its signal).

    1. I need help figuring out how to feed Python the right mjpeg stream format with ffmpeg
    2. I need help on the client side (JavaScript): how to show the binary message (the mjpeg stream) sent via websocket
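On point 1, one workable pattern is to keep the stream as MJPEG end to end and let Python split it into frames: every JPEG begins with the two-byte SOI marker ff d8 and ends with the EOI marker ff d9, so frame boundaries can be recovered from the bytes themselves instead of relying on fixed-size reads. A rough sketch; the chunk iterable stands in for repeated pipe.stdout.read(4096) calls:

```python
SOI = b"\xff\xd8"  # JPEG start-of-image marker
EOI = b"\xff\xd9"  # JPEG end-of-image marker

def iter_jpeg_frames(chunks):
    """Yield complete JPEG frames from an iterable of byte chunks."""
    buf = b""
    for chunk in chunks:
        buf += chunk
        while True:
            start = buf.find(SOI)
            if start < 0:
                # No frame start in sight; keep a trailing 0xff in case
                # the SOI marker straddles a chunk boundary.
                buf = buf[-1:] if buf.endswith(b"\xff") else b""
                break
            end = buf.find(EOI, start + 2)
            if end < 0:
                buf = buf[start:]  # partial frame, wait for more data
                break
            yield buf[start:end + 2]
            buf = buf[end + 2:]
```

Each yielded frame is one complete JPEG, which is exactly the shape a client can render directly (e.g. passed to write_message(frame, binary=True)). In principle ff d9 could also appear inside marker segments such as an embedded thumbnail, but webcam MJPEG output is normally free of those.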

    This is my current script

    Executing ffmpeg in Python (pipe) - Server Side

    --- cut ---
           multiprocessing.Process.__init__(self)
           self.resultQ = resultQ
           self.taskQ = taskQ
           self.FFMPEG_BIN = "/home/pi/bin/ffmpeg"
           self.video_w = 1280
           self.video_h = 720
           self.video_res = '1280x720'
           self.webcam = '/dev/video0'
           self.frame_rate = '10'
           self.command = ''
           self.pipe = ''
           self.stdout = ''
           self.stderr = ''

       #Start ffmpeg. These parameters need to be adjusted;
       #video formats already tried: rawvideo, singlejpeg, mjpeg,
       #mpjpeg, image2pipe.
       #I need help here (to make sure the format is right for the pipe)
       def camera_stream_start(self):
               self.command = [ self.FFMPEG_BIN,
                   '-loglevel', 'debug',
                   '-y',
                   '-f', 'v4l2',
                   '-input_format', 'mjpeg',
                   '-s', self.video_res,
                   '-r', self.frame_rate,
                   '-i', self.webcam,
                   '-c:v', 'copy',
                   '-an',
                   '-f', 'rawvideo',
                   #'-pix_fmts', 'rgb24',
                   '-']
               self.pipe = sp.Popen(self.command, stdin=sp.PIPE, stdout = sp.PIPE, shell=False)
               #return self.pipe

       #stop ffmpeg
       def camera_stream_stop(self):
           self.pipe.stdout.flush()
           self.pipe.terminate()
           self.pipe = ''
           #return self.pipe

       def run(self):
           #start stream
           self.camera_stream_start()
           logging.info("** Camera process started")
           while True:
                #get the stream from the pipe;
                #this part also needs to be adjusted
                #(I need help here):
                #process the stream that was read so it can be
                #sent to the browser via websocket
               stream = self.pipe.stdout.read(self.video_w*self.video_h*3)

               #reply format to main process
               #in main process, the data will be send over binary websocket
               #to client (self.write_message(data, binary=True))
               rpl = {
                   'task' : 'feed',
                   'is_binary': True,
                   'data' : stream
               }
               self.pipe.stdout.flush()
               self.resultQ.put(rpl)
               #add some wait
               time.sleep(0.01)
           self.camera_stream_stop()
           logging.info("** Camera process ended")

    ffmpeg output

    --- Cut ---    
    Successfully opened the file.
    Output #0, rawvideo, to 'pipe:':
     Metadata:
       encoder         : Lavf57.26.100
       Stream #0:0, 0, 1/10: Video: mjpeg, 1 reference frame, yuvj422p(center), 1280x720 (0x0), 1/10, q=2-31, -1 kb/s, 10 fps, 10 tbr, 10 tbn, 10 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    --- Cut ---    
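One thing visible in the output above: the muxer is rawvideo but the codec is copied MJPEG, so what actually comes down the pipe is a sequence of variable-size compressed JPEGs, not fixed-size raw frames. Reading exactly video_w * video_h * 3 bytes per iteration will therefore cut frames at arbitrary points, which would produce exactly the "lost signal" garbage described. A sketch of the command list with the output muxer switched to mjpeg to match the copied codec (same flags as the script above otherwise):

```python
FFMPEG_BIN = "/home/pi/bin/ffmpeg"  # same path as in the script above

def build_command(webcam="/dev/video0", res="1280x720", rate="10"):
    """v4l2 MJPEG in, stream copy, MJPEG byte stream out on stdout."""
    return [
        FFMPEG_BIN,
        "-loglevel", "error",
        "-f", "v4l2",
        "-input_format", "mjpeg",   # ask the C930e for its hardware MJPEG
        "-s", res,
        "-r", rate,
        "-i", webcam,
        "-c:v", "copy",             # no re-encoding
        "-an",
        "-f", "mjpeg",              # muxer now matches the copied codec
        "-",                        # write to stdout for the pipe
    ]
```

With -f mjpeg the pipe carries back-to-back JPEGs delimited by the standard JPEG start/end markers, so the reader can split on those instead of fixed-size blocks.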

    JavaScript websocket - on the client side

    --- Cut ---
    socket = new WebSocket(url, protocols || []);
    socket.binaryType = "arraybuffer";

    socket.onmessage = function (message) {
       //log.debug(message.data instanceof ArrayBuffer);
       //This is for the stream that sent via websocket
       if(message.data instanceof ArrayBuffer)
       {
           //I need help here
           //How can i process the binary stream
           //so its can be shown in the browser (img)
           var bytearray = new Uint8Array(message.data);
           var imageheight = 720;
           var imagewidth = 1280;

           var tempcanvas = document.createElement('canvas');
           tempcanvas.height = imageheight;
           tempcanvas.width = imagewidth;
           var tempcontext = tempcanvas.getContext('2d');

           var imgdata = tempcontext.getImageData(0,0,imagewidth,imageheight);

           var imgdatalen = imgdata.data.length;

        for(var i=8; i<imgdatalen; i++)
        {
            imgdata.data[i] = bytearray[i];
        }
        tempcontext.putImageData(imgdata, 0, 0);
    }
    //this is for an ordinary string sent via websocket
    else{
        pushData = JSON.parse(message.data);
        console.log(pushData);
    }

    --- Cut ---

    Any help, feedback, or anything else is very much appreciated. If something is not clear, please let me know.

  • Cannot get JACK Audio/Netjack working over LAN

    19 September 2016, by James

    I'm trying to stream low-latency audio between two Raspberry Pis. Both GStreamer and ffmpeg give me 2+ seconds of delay.

    I’ve played around with Jack Audio and locally on a single pi it seems promising. I can route mic input to a speaker locally and it is almost instantaneous.

    However, I have been having trouble getting it to route between devices using Netjack.

    # ON SERVER
    jackd -P70 -p16 -t2000 -dalsa -dhw:1 -p128 -n3 -r44100 -s

    # ON CLIENT
    jackd -v -R -P70 -dnetone -i1 -o1 -I0 -O0  -r44100 -p128 -n3

    # ON SERVER
    jack_netsource -H < ip address of client >
     jack_lsp # list available connection ports

    >system:capture_1
    >system:playback_1
    >system:playback_2
    >netjack:capture_1
    >netjack:capture_2
    >netjack:capture_3
    >netjack:playback_1
    >netjack:playback_2
    >netjack:playback_3

    jack_connect system:capture_1 system:playback_1 # this works
    jack_connect system:capture_1 netjack:playback_1 # this doesn't work :(

    Most of the launch options I pulled from http://wiki.linuxaudio.org/wiki/raspberrypi#using_jack and, to be honest, I don't really know what they do.
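For what the main flags mean: -p sets the period size in frames, -n the number of periods buffered, and -r the sample rate, so jackd's own nominal buffering latency is periods × period_size / rate (network transit comes on top of that, which is roughly what jack_netsource is reporting). A quick check of the values used above:

```python
def jack_buffer_latency_ms(period_frames, nperiods, sample_rate):
    """Nominal jackd buffering latency in milliseconds."""
    return 1000.0 * period_frames * nperiods / sample_rate

# The settings used above: -p128 -n3 -r44100
print(f"{jack_buffer_latency_ms(128, 3, 44100):.1f} ms")  # about 8.7 ms
```

So at these settings JACK's own buffering is well under 10 ms; the multi-second delays seen with GStreamer/ffmpeg come from their pipelines, not from buffer sizes like these.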

    The client jackd output shows messages like

    Jack: data not valid
    Jack: data not valid
    Jack: JackSocketServerChannel::Execute : fPollTable i = 1 fd = 6
    Jack: JackRequest::Notification
    Jack: JackEngine::ClientNotify: no callback for notification = 3
    Jack: JackEngine::ClientNotify: no callback for notification = 3
    netxruns... duration: 139ms
    Jack: JackSocketServerChannel::Execute : fPollTable i = 1 fd = 6
    Jack: JackRequest::Notification
    Jack: JackEngine::ClientNotify: no callback for notification = 3
    Jack: JackEngine::ClientNotify: no callback for notification = 3

    And the server jack_netsource output looks like

    current latency 114
    current latency 20
    current latency 27
    current latency 29
    current latency 48
    current latency 23
    current latency 33
    current latency 28
    current latency 41
    current latency 84
    current latency 44

    and the server jackd output looks like

    JackAudioDriver::ProcessGraphAsyncMaster: Process error
    JackAudioDriver::ProcessGraphAsyncMaster: Process error
    JackAudioDriver::ProcessGraphAsyncMaster: Process error
    JackAudioDriver::ProcessGraphAsyncMaster: Process error
    JackEngine::XRun: client = netjack was not finished, state = Triggered
    JackAudioDriver::ProcessGraphAsyncMaster: Process error
    JackAudioDriver::ProcessGraphAsyncMaster: Process error
    JackEngine::XRun: client = netjack was not finished, state = Triggered
    JackEngine::XRun: client = netjack was not finished, state = Triggered

    I believe the -dnetone flag selects NetJack2. NetJack 1, which I've tried with the -dnet flag, results in a single Not Connected message from jack_netsource and:

    Jack: CatchHost fd = 5 err = Resource temporarily unavailable
    Jack: CatchHost fd = 5 err = Resource temporarily unavailable
    Jack: CatchHost fd = 5 err = Resource temporarily unavailable
    Jack: CatchHost fd = 5 err = Resource temporarily unavailable
    Jack: CatchHost fd = 5 err = Resource temporarily unavailable
    Jack: JackSocketServerChannel::Execute : fPollTable i = 1 fd = 6

    from the client jackd.