
Other articles (63)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP to find out.

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

        Distribution name    Version name            Version number
        Debian               Squeeze                 6.x.x
        Debian               Wheezy                  7.x.x
        Debian               Jessie                  8.x.x
        Ubuntu               The Precise Pangolin    12.04 LTS
        Ubuntu               The Trusty Tahr         14.04

    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send the necessary fixes to add (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded to Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (9366)

  • Matplotlib animation quality loss

    28 June 2017, by Doe a

    I’m currently trying to use matplotlib to animate a grid using imshow. However, I am finding a fairly significant quality loss in the animation.

    Below is a simple animation that illustrates my issue fairly well. If you look at any particular frame of imagelist, you will see that there is no aliasing or gradient between colours. But if I then look at one frame of my animation, that is no longer the case. There is now a 4-5 pixel gradient between my blocks of colour. Does anyone know how I can get around this compression? The file size of the animation doesn't matter too much, as long as I can get a good quality animation.

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib import animation

    # Create the list of 1x4 RGB frames; the red channel of the second pixel
    # ramps up over the frames
    image = np.array([[[0, 0, 0], [0.7, 0, 0], [0, 0, 0], [0, 0, 0.8]]])
    imagelist = []
    for i in range(100):
       image[0][1][0] = i / 100.0
       imagelist.append(np.copy(image))

    # Create figure
    fig = plt.figure()
    plt.axis('off')
    im = plt.imshow(imagelist[0], vmin=0, vmax=255, interpolation='none')

    # Animation: swap in the next frame on each update
    def updatefig(j):
       im.set_array(imagelist[j])
       return [im]
    ani = animation.FuncAnimation(fig, updatefig, frames=range(len(imagelist)),
                                  interval=20, blit=False)

    # Save animation with the ffmpeg writer
    FFMpegWriter = animation.writers['ffmpeg']
    mywriter = FFMpegWriter(fps=30, bitrate=5000)
    ani.save("test.mp4", writer=mywriter, codec="libx264")
  • Live video from raw tcp packets

    7 June 2017, by benuuts

    We are trying to make a small Python app that displays live video from sniffed packets, using scapy and ffplay. This is part of our master's degree research project. The goal is to make a proof-of-concept app that spies on video transmitted over TCP.
    We have a working script that writes into a .dat file, which we then read with ffplay. It works OK but has a lot of latency, and we think we could do better: stream directly into ffplay without having to write the raw data to a file.

    Here's our script:

    from scapy.all import *

    # File that ffplay will later read the raw H.264 stream from
    export_dat = open("data.dat", "a")

    # Append the raw TCP payload of each sniffed packet to the file
    def write_packet_raw(packet):
       export_dat.write(str(packet.getlayer(Raw)))

    # Sniff up to 5000 packets on port 5555 and dump their payloads
    def realtime_packet():
       sniff(iface="wlan0", filter="tcp and (port 5555)", count=5000, prn=write_packet_raw)

    realtime_packet()
    export_dat.close()

    And then we launch: ffplay -window_title Videostream -framedrop -infbuf -f h264 -i data.dat

    Any idea on how we can achieve that? Thanks.
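
    A minimal sketch of the piping idea, under the assumption that ffplay will accept the raw H.264 bytes on stdin exactly as it does from the .dat file (the interface, port and ffplay options are taken from the script above):

    from scapy.all import *
    import subprocess

    # Start ffplay reading the raw h264 stream from its stdin instead of a file
    ffplay = subprocess.Popen(
        ["ffplay", "-window_title", "Videostream", "-framedrop", "-infbuf",
         "-f", "h264", "-i", "pipe:0"],
        stdin=subprocess.PIPE)

    # Forward each sniffed TCP payload straight into ffplay
    def forward_packet(packet):
        if packet.haslayer(Raw):
            ffplay.stdin.write(bytes(packet[Raw].load))
            ffplay.stdin.flush()

    sniff(iface="wlan0", filter="tcp and (port 5555)", prn=forward_packet)
    ffplay.stdin.close()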

  • MPEG-DASH Livestreaming using ffmpeg, avconv, MP4Box and Dashjs

    28 April 2017, by Sushant Mongia

    I'm working on delivering live streaming with DASH capabilities. Long story short, it's a very crude testbed setup, so I might be off the mark in some respects. I'm also posting this as a simple setup for the community and for people out there struggling to find a live-streaming-with-DASH tutorial.

    Setup:
    OS: Ubuntu 16.04
    Encoding Tools:
    ffmpeg: to record a livestream using a desktop webcam in mpeg2 format
    avconv: to convert mpeg2 to mpeg4 file format
    MP4Box: to DASH it, i.e. produce the .mpd, some conf files, seg_init and the segments
    Dashjs: Reference Client 2.4.1
    Server: Apache

    Process:
    I've written 3 bash scripts, each with basically an infinite while loop in it, containing the ffmpeg, avconv and MP4Box commands, one per script. I first run the ffmpeg script, which records video from the desktop webcam, and then I run the avconv script, which kills the ffmpeg command and converts the file from mpeg2 to mpeg4 format. Since the ffmpeg command is in an infinite while loop, it restarts. Then I run the MP4Box command that DASH-es the avconv command's output. Everything is then sent to the DASHjs client, and pretty much the whole setup gets repeated every 5 seconds.

    Commands:

    ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0 -f mpegts -codec:v mpeg1video -s 640x480 -b:v 1000k -bf 0 livestream

    avconv -i livestream out.mp4

    MP4Box, run in a loop:
    MP4Box -dash-live 4000 -fps 24 -frag 6000 -profile dashavc264:live -dynamic -mpd-refresh 5000 -dash-ctx dashtest.txt -time-shift -1 -inter 0 -segment-name output-seg -bs-switching no out.mp4

    Problem:
    ffmpeg sends a 5-second chunk (due to the sleep command in my avconv bash script), and MP4Box reads that 5-second chunk and loops it. So when the next chunk comes in, newer segments are produced, but the player is still playing the older segments, typically just the very first few segments, in a loop.

    Questions:
    1) Am I missing some core concept here? Are the commands, and their respective attributes, using the right parameters and values?
    2) I believe there should be a way to pipeline these processes in a better manner; should I be looking into writing a Python script, maybe?

    Happy to provide more info! Cheers
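
    A rough sketch of the Python route (question 2), under two assumptions: that ffmpeg can encode the webcam directly to H.264/MP4, which would remove the avconv step, and that reusing -dash-ctx keeps MP4Box's segment numbering continuous across iterations. The flag values are copied from the commands above and are untested here.

    import subprocess

    CHUNK_SECONDS = 5

    while True:
        # Grab a short chunk from the webcam and encode it straight to H.264/MP4,
        # so no separate avconv pass is needed
        subprocess.run(["ffmpeg", "-y", "-f", "v4l2", "-framerate", "25",
                        "-video_size", "640x480", "-i", "/dev/video0",
                        "-t", str(CHUNK_SECONDS),
                        "-codec:v", "libx264", "-b:v", "1000k", "-bf", "0",
                        "out.mp4"], check=True)

        # Re-DASH the fresh chunk; -dash-ctx should carry the live context over
        # so new segments continue the timeline instead of restarting it
        subprocess.run(["MP4Box", "-dash-live", "4000", "-fps", "24", "-frag", "6000",
                        "-profile", "dashavc264:live", "-dynamic",
                        "-mpd-refresh", "5000", "-dash-ctx", "dashtest.txt",
                        "-time-shift", "-1", "-inter", "0",
                        "-segment-name", "output-seg", "-bs-switching", "no",
                        "out.mp4"], check=True)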