Advanced search

Media (91)

Other articles (75)

  • Managing creation and editing rights for objects

    8 February 2011, by

    By default, many features are restricted to administrators, but each remains independently configurable so that the minimum status required to use it can be changed, notably: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running on MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Supporting all media types

13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (9151)

  • FFMPEG - broadcast "infinite" mp4 udp via HLS

18 January 2023, by BladesV

    I have a script that sends an MP4 video via UDP in an infinite loop, simulating a live stream.
I want to broadcast it live via HLS.

    I use the following command:

    ffmpeg -i udp://: -c:v libx264 -b:v 3000k -c:a aac -b:a 128k -f hls -hls_time 10 -hls_list_size 0 -hls_segment_filename /output%03d.ts

    But it doesn't work.
FFmpeg produces no output most of the time, except when I try to send a very small video, in which case it seems to work, but only after it has finished receiving the whole video.

    Any help would be appreciated.
Other methods to accomplish the same thing would be welcome as well.
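
    One likely culprit, as a guess from the symptoms: plain MP4 is generally not a streamable container over UDP, since its index (the moov atom) must be read before decoding can start, which would explain why only small, fully received files appear to work; remuxing the looping source to MPEG-TS is the usual workaround. The command above also never names a playlist output. A minimal sketch of a corrected invocation, written in Python (the listening address, port and file names are hypothetical placeholders; the original host/port were redacted):

import subprocess

# Sketch only: the address/port and paths below are assumptions.
cmd = [
    "ffmpeg",
    "-i", "udp://0.0.0.0:1234",       # receive the looped stream (ideally MPEG-TS, not raw MP4)
    "-c:v", "libx264", "-b:v", "3000k",
    "-c:a", "aac", "-b:a", "128k",
    "-f", "hls",
    "-hls_time", "10",
    "-hls_list_size", "6",            # keep a sliding window suited to live playback
    "-hls_flags", "delete_segments",  # drop stale segments instead of keeping them all
    "-hls_segment_filename", "/output%03d.ts",
    "/output.m3u8",                   # the playlist output missing from the original command
]
subprocess.run(cmd, check=True)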

  • How to get the state of the streaming source in GStreamer-Python?

17 February 2023, by doruk.sonmez

    I'm trying to analyze and act on the current state of the streaming source in GStreamer. I have a basic script that creates the pipeline elements, links them, and eventually shows the live IP camera stream on my screen. However, most of the IP cameras seem to stop streaming video at some point: the camera IP remains reachable, but RTSP drops the connection or displays an all-black screen. I want to detect when the stream drops and set up a periodic check that reconnects to it. I'm already listening to some bus messages at runtime, but none of them seems to provide what I need.

    It would be great to have some ideas on how to check the state of the stream at any given time.

    Here are the basic building blocks of my code:

import sys
import numpy as np
import cv2
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GObject

image_arr = None  # latest frame pulled from the appsink

def on_src_pad_added(src, new_pad, depayer):
    # rtspsrc creates its source pads dynamically; link each new pad
    # to the depayloader's sink pad once it appears.
    sink_pad = depayer.get_static_pad("sink")

    if sink_pad.is_linked():
        print("We are already linked. Ignoring.")
        return

    # check the new pad's type
    new_pad_caps = new_pad.get_current_caps()
    new_pad_struct = new_pad_caps.get_structure(0)
    new_pad_type = new_pad_struct.get_name()

    new_pad.link(sink_pad)

def gst_to_opencv(sample):
    # Convert a pulled Gst.Sample into an HxWx3 numpy array for OpenCV.
    buf = sample.get_buffer()
    caps = sample.get_caps()

    arr = np.ndarray(
        (caps.get_structure(0).get_value('height'),
         caps.get_structure(0).get_value('width'),
         3),
        buffer=buf.extract_dup(0, buf.get_size()),
        dtype=np.uint8)
    return arr

def new_buffer(sink, data):
    # appsink "new-sample" callback: stash the latest frame globally.
    global image_arr
    sample = sink.emit("pull-sample")
    image_arr = gst_to_opencv(sample)
    return Gst.FlowReturn.OK

    After these callbacks I'm constructing my pipeline:

def main():
    # Standard GStreamer initialization
    GObject.threads_init()  # a no-op on modern PyGObject, kept from the original
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element reading from the RTSP camera
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("rtspsrc", "rtsp-cam-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    depay = Gst.ElementFactory.make("rtph264depay", "rtp-depay")
    if not depay:
        sys.stderr.write(" Unable to create rtph264depay \n")

    parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not parser:
        sys.stderr.write(" Unable to create h264parse \n")

    decoder = Gst.ElementFactory.make("avdec_h264", "h264-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create avdec_h264 \n")
    ...

    # Set plugin properties...
    # Add plugins to the pipeline...
    # Link plugins...
    ...

    Lastly, my live streaming and message listening block is as follows:

...
    # start playback and listen to events
    print("Starting pipeline \n")
    ret = pipeline.set_state(Gst.State.PLAYING)
    if ret == Gst.StateChangeReturn.FAILURE:
        print("Unable to set the pipeline to the playing state.")
        exit(-1)

    # create an event loop and feed gstreamer bus messages to it
    bus = pipeline.get_bus()
    bus.add_signal_watch()

    # Parse messages
    while True:
        # get_state() blocks until the pending state change settles
        pipe_state = pipeline.get_state(Gst.CLOCK_TIME_NONE)
        print(pipe_state.state)

        message = bus.timed_pop_filtered(10000, Gst.MessageType.ANY)
        if image_arr is not None:
            cv2.imshow("Receive Image from Pipeline Buffer", image_arr)
            if cv2.waitKey(1) == ord('q'):
                break
        if message:
            if message.type == Gst.MessageType.ERROR:
                err, debug = message.parse_error()
                print("Error received from element %s: %s" % (
                    message.src.get_name(), err))
                print("Debugging information: %s" % debug)
                break
            elif message.type == Gst.MessageType.EOS:
                print("End-Of-Stream reached.")
                break
            elif message.type == Gst.MessageType.STATE_CHANGED:
                if isinstance(message.src, Gst.Pipeline):
                    old_state, new_state, pending_state = message.parse_state_changed()
                    print("Pipeline state changed from %s to %s." %
                          (old_state.value_nick, new_state.value_nick))
            else:
                # print(message.type)
                continue

    # cleanup: send EOS before dropping to NULL so elements can flush
    pipeline.send_event(Gst.Event.new_eos())
    pipeline.set_state(Gst.State.NULL)
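
    A minimal watchdog sketch for detecting a stalled source (nothing below is from the original code: the probe target, the 5-second threshold and the reconnect hook are all assumptions): timestamp every buffer with a pad probe, then periodically check how long ago the last one arrived.

import time
from gi.repository import GLib

last_buffer_time = time.monotonic()  # updated whenever a buffer flows

def on_buffer(pad, info, user_data):
    # Pad probe callback: record the arrival time of every buffer.
    global last_buffer_time
    last_buffer_time = time.monotonic()
    return Gst.PadProbeReturn.OK

def check_stream_alive():
    # If no buffer has arrived for 5 seconds (an arbitrary threshold),
    # treat the stream as dropped and trigger whatever reconnect logic fits.
    if time.monotonic() - last_buffer_time > 5.0:
        print("Stream appears stalled, reconnecting...")
        # e.g. set the pipeline to NULL, then rebuild/relink the source
    return True  # returning True keeps the GLib timer running

# After linking the elements:
# depay.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, on_buffer, None)
# GLib.timeout_add_seconds(1, check_stream_alive)  # needs a running GLib main loop;
# with the polling while-loop above, call check_stream_alive() there instead.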

  • Best logical formula to determine perceptual / "experienced" quality of a video, given resolution/fps and bitrate?

20 March 2023, by JamesK

    I am looking for a formula that can give me a relatively decent approximation of a Video's playback quality, calculated from four metrics: width, height, fps, and bitrate (bits/sec). Alternatively, I could also use FFMPEG or similar tools to calculate a Video's playback quality, if any of those tools provide something like what I am looking for here.

    An example of what a Video might look like in my problem is as follows:

    interface Video {
  /** The width of the Video (in pixels). */
  width: number
  /** The height of the Video (in pixels). */
  height: number
  /** The frame rate of the Video (frames per second). */
  fps: number
  /** The bitrate of the video, in bits per second (e.g. 5_000_000 = 5Mbit/sec) */
  bitrate: number
}

    I came up with the following function to compute the average number of bits available to any given pixel per second:

    const computeVideoQualityScalar = (video: Video): number => {
  // The amount of pixels pushed to the display, per frame.
  const pixelsPerFrame = video.width * video.height
  
  // The amount of pixels pushed to the display, per second.
  const pixelsPerSecond = pixelsPerFrame * video.fps
  
  // The average amount of bits used by each pixel, each second,
  // to convey all data relevant to that pixel (e.g. color data, etc)
  const bitsPerPixelPerSecond = video.bitrate / pixelsPerSecond
  
  return bitsPerPixelPerSecond
}

    While my formula does a good job of providing a more-or-less "standardized" assessment of mathematical quality for any given video, it falls short when I try to use it to compare videos of different resolutions. For example, a 1080p60fps video with a bitrate of 10 Mbit/sec has greater visual fidelity (at least subjectively, to my eyes) than a 720p30fps video with a bitrate of 9 Mbit/sec. My formula, however, scores the 720p30fps video significantly higher, because it has more bits available per pixel per second (9,000,000 / (1280 × 720 × 30) ≈ 0.33) than the 1080p60fps video (10,000,000 / (1920 × 1080 × 60) ≈ 0.08).

    I am struggling to come up with either a different way to calculate the "subjective video quality" of a given video, or a way to extend my existing idea here.
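
    One possible refinement, sketched below in Python rather than TypeScript (the 0.95 exponent is an arbitrary assumption to tune, not an established constant): perceived quality tends to grow sublinearly with pixel throughput, so instead of dividing the bitrate by the full pixel rate, discount the pixel rate with a power slightly below one.

# Sketch only: score = bitrate / pixel_rate ** (1 - exponent), i.e.
# bits-per-pixel re-weighted by a sublinear function of pixel throughput,
# so higher resolutions are not penalized one-for-one for spreading the
# same bits over more pixels. The 0.95 default is an assumption to tune.
def perceptual_quality_score(width: int, height: int, fps: float,
                             bitrate: float, exponent: float = 0.95) -> float:
    pixels_per_second = width * height * fps
    bits_per_pixel_per_second = bitrate / pixels_per_second
    return bits_per_pixel_per_second * (pixels_per_second ** exponent)

# The two examples from above: with this exponent, 1080p60 at 10 Mbit/s
# now edges out 720p30 at 9 Mbit/s, matching the subjective impression.
print(perceptual_quality_score(1920, 1080, 60, 10_000_000))
print(perceptual_quality_score(1280, 720, 30, 9_000_000))

    For a measured rather than estimated answer, full-reference metrics such as VMAF (exposed in FFmpeg through the libvmaf filter) are the usual tools, but they need decoded frames and a reference video, not just the four metadata fields above.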