
Other articles (105)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP site to find out.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to MP4, OGV and WebM (supported by HTML5), MP4 also being readable by Flash.
    Audio files are encoded to MP3 and Ogg (supported by HTML5), MP3 also being readable by Flash.
    Where possible, text is analyzed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
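
    The excerpt above describes behaviour rather than code, but the kind of ffmpeg invocations such a pipeline runs can be sketched. The following is a minimal, hypothetical Python sketch, not MediaSPIP's actual code; the file names are placeholders:

    import subprocess

    SOURCE = "upload.mov"  # original file; kept as-is alongside the derivatives

    # Video derivatives: MP4 (H.264), plus WebM and OGV for HTML5 playback.
    subprocess.run(["ffmpeg", "-i", SOURCE, "-c:v", "libx264", "-c:a", "aac", "out.mp4"], check=True)
    subprocess.run(["ffmpeg", "-i", SOURCE, "-c:v", "libvpx", "-c:a", "libvorbis", "out.webm"], check=True)
    subprocess.run(["ffmpeg", "-i", SOURCE, "-c:v", "libtheora", "-c:a", "libvorbis", "out.ogv"], check=True)

    # Audio derivatives: MP3 and Ogg/Vorbis ("-vn" drops any video stream).
    subprocess.run(["ffmpeg", "-i", SOURCE, "-vn", "-c:a", "libmp3lame", "out.mp3"], check=True)
    subprocess.run(["ffmpeg", "-i", SOURCE, "-vn", "-c:a", "libvorbis", "out.ogg"], check=True)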

  • Accepted formats

    28 January 2010

    The following commands give information about the formats and codecs handled by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)
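
    As a quick way to script the same check, here is a minimal sketch (assuming ffmpeg is on the PATH) that runs the two commands above and does a crude substring test for a given codec and container:

    import subprocess

    codecs = subprocess.run(["ffmpeg", "-codecs"], capture_output=True, text=True).stdout
    formats = subprocess.run(["ffmpeg", "-formats"], capture_output=True, text=True).stdout

    # Crude substring checks; parse the output columns properly for real use.
    print("h264 codec listed:", " h264 " in codecs)
    print("flv format listed:", " flv " in formats)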

On other sites (8340)

  • Trying to tonemap 14-bit grayscale video

    12 January 2018, by Trevor

    I’m trying to generate H.264 video from raw 2-byte gray video (a 14-bit range encoded in 16-bit values). I can do something like:

    ffmpeg -f rawvideo -pix_fmt gray16le -s:v 1280x720 -r 60 -i input.raw -c:v libx264 output.mp4

    I do get video, but it’s pretty dark; I’m not sure whether it’s clipping, doing a linear remap, or storing the 16-bit data with VLC doing the remap. ffprobe reports: Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuvj444p(pc), 1280x720, 108 kb/s, 60 fps, 60 tbr, 15360 tbn, 120 tbc

    I figured I’d use the tonemap filter to get a better mapping, so I added a filter before the output file with -vf.

    • tonemap=hable errors with: Impossible to convert between the formats supported by the filter 'graph 0 input from stream 0:0' and the filter 'auto_scaler_0'
    • zscale=transfer=linear,tonemap=hable errors with: Impossible to convert between the formats supported by the filter 'Parsed_tonemap_1' and the filter 'auto_scaler_1'
    • zscale=transfer=linear,tonemap=hable,zscale=transfer=bt709,format=yuvj444p errors with: code 3074: no path between colorspaces

    I’m not sure how to proceed from here...
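
    One observation: since only the low 14 bits carry data, a straight 16-bit-to-8-bit conversion leaves the picture at a quarter of full brightness, which matches the dark output described above. A way to sidestep the filter-graph format errors entirely is to do the remap outside ffmpeg and pipe 8-bit gray frames into the encoder. Below is a minimal numpy sketch, assuming a simple linear stretch is acceptable (geometry and file names taken from the question):

    import subprocess
    import numpy as np

    W, H, FPS = 1280, 720, 60
    FRAME_BYTES = W * H * 2  # gray16le: 2 bytes per pixel

    encoder = subprocess.Popen(
        ["ffmpeg", "-f", "rawvideo", "-pix_fmt", "gray",
         "-s:v", "%dx%d" % (W, H), "-r", str(FPS), "-i", "-",
         "-c:v", "libx264", "output.mp4"],
        stdin=subprocess.PIPE)

    with open("input.raw", "rb") as f:
        while True:
            raw = f.read(FRAME_BYTES)
            if len(raw) < FRAME_BYTES:
                break
            frame = np.frombuffer(raw, dtype=np.uint16)
            # 14-bit data in 16-bit words: shift right by 6 maps 0..16383 to 0..255.
            encoder.stdin.write((frame >> 6).astype(np.uint8).tobytes())

    encoder.stdin.close()
    encoder.wait()

    This is a plain linear remap, not the Hable curve the tonemap filter would apply; a per-frame normalization or gamma adjustment could be slotted in at the marked line.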

  • hwcontext_vdpau : implement av_hwdevice_get_hwframe_constraints()

    13 January 2018, by wm4
    hwcontext_vdpau : implement av_hwdevice_get_hwframe_constraints()
    

    In addition, this does not allow creating frames contexts with sw_format
    for which no known transfer formats exist. In theory, we should check
    whether the chroma format (i.e. the sw_format) is supported at all by
    the vdpau driver, but checking for transfer formats has the same effect.

    Note that the pre-existing code adds 1 to priv->nb_pix_fmts[i] for an
    unknown reason, and some checks need to account for that when testing
    for empty lists. These are not off-by-one errors.

    • [DH] libavutil/hwcontext_vdpau.c
  • Live video stream on server (PC) from images sent by robot through UDP

    3 February 2018, by Richard Knop

    Hmm. I found this, which seems promising:

    http://sourceforge.net/projects/mjpg-streamer/


    OK. I will try to explain clearly and in detail what I am trying to do.

    I have a small humanoid robot with a camera and a wifi stick (this is the robot). The wifi stick’s average transfer rate is 1769 KB/s. The robot has a 500 MHz CPU and 256 MB of RAM, which is not enough for any serious computation (moreover, there are already a couple of modules running on the robot for motion, vision, sonar, speech, etc.).

    I have a PC from which I control the robot. I am trying to have the robot walk around the room while I watch, on the PC, a live video stream of what it sees.

    What I already have working: the robot walks as I want it to and takes images with its camera. The images are sent over UDP to the PC, where I receive them (I have verified this by saving the incoming images to disk).

    The camera returns 640 x 480 px images in the YUV422 colorspace. I am sending the images with lossy compression (JPEG) because I am trying to get the best possible FPS on the PC. I do the JPEG compression on the robot with the PIL library.

    My questions:

    1. Could somebody give me some ideas on how to convert the incoming JPEG images into a live video stream? I understand that I will need a video encoder for that. Which video encoder do you recommend? FFmpeg or something else? I am very new to video streaming, so I want to know what is best for this task. I’d prefer to use Python to write this, so a video encoder or library with a Python API would be ideal; but I guess if the library has a good command-line API, it doesn’t have to be in Python.

    2. What is the best FPS I could get out of this, given the 1769 KB/s average wifi transfer rate and the dimensions of the images? Should I use a different compression than JPEG?

    3. I will be happy to see any code examples. Links to articles explaining how to do this would be fine, too.
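
    (A rough ceiling for question 2, using an assumed rather than measured frame size: if a 640 x 480 JPEG at moderate quality comes out around 40 KB, then 1769 KB/s ÷ 40 KB per frame ≈ 44 frames per second before UDP and protocol overhead, so the wifi link itself is unlikely to be the first bottleneck.)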

    Some code samples. Here is how I am sending JPEG images from the robot to the PC (a shortened, simplified snippet). This runs on the robot:

    # lots of code here

    from socket import socket, AF_INET, SOCK_DGRAM
    import Image           # PIL (old-style Python 2 import)
    import StringIO

    UDPSock = socket(AF_INET, SOCK_DGRAM)

    while 1:
        # camProxy, nameId and addr are set up in the elided code above
        image = camProxy.getImageLocal(nameId)
        size = (image[0], image[1])   # width, height in pixels
        data = image[6]               # raw pixel data
        im = Image.fromstring("YCbCr", size, data)
        s = StringIO.StringIO()
        im.save(s, "JPEG")            # compress the frame in memory

        UDPSock.sendto(s.getvalue(), addr)

        camProxy.releaseImage(nameId)

    UDPSock.close()

    # lots of code here

    Here is how I am receiving the images on the PC. This runs on the PC:

    # lots of code here

    from socket import socket, AF_INET, SOCK_DGRAM

    UDPSock = socket(AF_INET, SOCK_DGRAM)
    UDPSock.bind(addr)   # addr and buf are set up in the elided code above

    while 1:
        data, addr = UDPSock.recvfrom(buf)
        # here I need to create a stream from the data
        # which contains a JPEG image

    UDPSock.close()

    # lots of code here
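
    For question 1, one approach is to hand each received JPEG to an ffmpeg process as an MJPEG pipe and let it produce a live H.264 stream. A minimal sketch under stated assumptions: one complete JPEG per UDP datagram, and placeholder addresses/ports:

    import subprocess
    from socket import socket, AF_INET, SOCK_DGRAM

    BUF = 65535                # maximum UDP datagram size
    addr = ("0.0.0.0", 5000)   # placeholder listen address

    # Decode the incoming MJPEG on stdin, encode H.264, and send it out as MPEG-TS.
    encoder = subprocess.Popen(
        ["ffmpeg",
         "-f", "image2pipe", "-c:v", "mjpeg", "-r", "15", "-i", "-",
         "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
         "-f", "mpegts", "udp://127.0.0.1:1234"],
        stdin=subprocess.PIPE)

    sock = socket(AF_INET, SOCK_DGRAM)
    sock.bind(addr)
    try:
        while True:
            data, _ = sock.recvfrom(BUF)
            encoder.stdin.write(data)  # raw JPEG bytes straight into the pipe
    finally:
        sock.close()
        encoder.stdin.close()
        encoder.wait()

    The resulting stream can then be opened in a player such as VLC at udp://@127.0.0.1:1234.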