Other articles (35)

  • Support for all types of media

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other formats (Open Office, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • Other interesting software

    13 April 2011

    We don’t claim to be the only ones doing what we do, nor do we claim to be the best at it. We just try to do it well, and to keep getting better.
    The following list covers software that is more or less similar to MediaSPIP, or whose feature set MediaSPIP more or less tries to match.
    We don’t know these projects well and haven’t tried them, but you can take a peek.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (6295)

  • FFmpeg "no frame!" encoding error

    9 July 2015, by oleg.semen

    I’m trying to compress a video by scaling it down. Here is how.
    But I’m getting this error:

    D/FFMpeg﹕ progress[h264 @ 0x42124970] no frame!
    D/FFMpeg﹕ progress[aac @ 0x42122fe0] Input buffer exhausted before END element found

    Here is the whole log:

    Loading FFmpeg for armv7-neon CPU
    start
    Running publishing updates method
    progressWARNING: linker: /data/data/com.example.ffmpeg/files/ffmpeg has text relocations. This is wasting memory and is a security risk. Please fix.
    progressffmpeg version n2.4.2 Copyright (c) 2000-2014 the FFmpeg developers
    progress  built on Oct  7 2014 15:08:46 with gcc 4.8 (GCC)
    progress  configuration: --target-os=linux --cross-prefix=/home/sb/Source-Code/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/sb/Source-Code/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/sb/Source-Code/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/sb/Source-Code/ffmpeg-android/build/armeabi-v7a-neon --extra-cflags='-I/home/sb/Source-Code/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all -mfpu=neon' --extra-ldflags='-L/home/sb/Source-Code/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
    progress  libavutil      54.  7.100 / 54.  7.100
    progress  libavcodec     56.  1.100 / 56.  1.100
    progress  libavformat    56.  4.101 / 56.  4.101
    progress  libavdevice    56.  0.100 / 56.  0.100
    progress  libavfilter     5.  1.100 /  5.  1.100
    progress  libswscale      3.  0.100 /  3.  0.100
    progress  libswresample   1.  1.100 /  1.  1.100
    progress  libpostproc    53.  0.100 / 53.  0.100
    progress[h264 @ 0x42124970] no frame!
    progress[aac @ 0x42122fe0] Input buffer exhausted before END element found
    progressInput #0, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/emulated/0/Movies/Instagram/VID_37551017_035953.mp4':
    progress  Metadata:
    progress    major_brand     : isom
    progress    minor_version   : 0
    progress    compatible_brands: isom3gp4
    progress    creation_time   : 2015-06-19 09:03:19
    progress  Duration: 00:00:03.20, start: 0.000000, bitrate: 2975 kb/s
    progress    Stream #0:0(eng): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x640, 2911 kb/s, 15.22 fps, 14.92 tbr, 90k tbn, 180k tbc (default)
    progress    Metadata:
    progress      creation_time   : 2015-06-19 09:03:19
    progress      handler_name    : VideoHandle
    progress    Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 101 kb/s (default)
    progress    Metadata:
    progress      creation_time   : 2015-06-19 09:03:19
    progress      handler_name    : SoundHandle

    Did I miss something in the config?
    Thanks.
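
    For reference, since the scaling command itself is missing from this excerpt, a typical scale-based compression invocation looks like the sketch below (the file names, target width and CRF value are illustrative assumptions; scale=320:-2 keeps the aspect ratio while forcing an even height, which libx264 requires):

    ffmpeg -i input.mp4 -vf scale=320:-2 -c:v libx264 -crf 28 -c:a copy output.mp4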

  • Setting up RTP on Nginx

    2 February 2021, by Swap

    I'm trying to use Janus Media Server to relay WebRTC streams to a particular RTP host/port, from where ffmpeg can pick them up as input and convert them further to an RTMP stream, which can then be used to broadcast to various social media platforms (such as YouTube, Twitch, Facebook, etc.).

    My inspiration for this has been the following blog - https://www.meetecho.com/blog/firefox-webrtc-youtube-kinda/

    Specifically, I'm trying to replicate the following architecture -

    [architecture diagram]

    And Janus, as per their documentation, has a very neat API for doing it -

    {
        "request" : "rtp_forward",
        "room" : <unique numeric ID of the room the publisher is in>,
        "publisher_id" : <unique numeric ID of the publisher to relay externally>,
        "host" : "<host address to forward the RTP (and RTCP) packets to>",
        "host_family" : "",
        "audio_port" : <port to forward the audio RTP packets to>,
        "audio_ssrc" : <audio SSRC to use; optional>,
        "audio_pt" : <audio payload type to use; optional>,
        "audio_rtcp_port" : <port to contact to receive audio RTCP feedback from; optional, and currently unused for audio>,
        "video_port" : <port to forward the video RTP packets to>,
        "video_ssrc" : <video SSRC to use; optional>,
        "video_pt" : <video payload type to use; optional>,
        "video_rtcp_port" : <port to contact to receive video RTCP feedback from; optional>,
        "simulcast" : <true|false; optional>,
        "video_port_2" : <if simulcasting and forwarding each substream, port to forward the video RTP packets from the second substream/layer to>,
        "video_ssrc_2" : <if simulcasting and forwarding each substream, video SSRC to use for the second substream/layer; optional>,
        "video_pt_2" : <if simulcasting and forwarding each substream, video payload type to use for the second substream/layer; optional>,
        "video_port_3" : <if simulcasting and forwarding each substream, port to forward the video RTP packets from the third substream/layer to>,
        "video_ssrc_3" : <if simulcasting and forwarding each substream, video SSRC to use for the third substream/layer; optional>,
        "video_pt_3" : <if simulcasting and forwarding each substream, video payload type to use for the third substream/layer; optional>,
        "data_port" : <port to forward the data messages to>,
        "srtp_suite" : <length of authentication tag; optional>,
        "srtp_crypto" : "<key to use as crypto (encoded key); optional>"
    }

    For this, I've set up an Nginx server, where I've also installed Janus, and everything has been running smoothly so far. But I'm quite clueless as to how to set up my Nginx server so that it accepts RTP connections (which will then be forwarded as RTMP using ffmpeg).


    Please guide me to any relevant resources that would help me achieve this. Thanks in advance!
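
    For reference, the ffmpeg leg of such a pipeline is usually driven by an SDP file that describes the RTP streams Janus forwards, and Nginx itself only needs an RTMP module (e.g. nginx-rtmp) to accept the resulting stream. A minimal sketch, assuming Opus audio forwarded to port 5002 and H.264 video to port 5004 (all ports, payload types, file names and URLs here are illustrative assumptions, not values from the question):

    rtp.sdp, describing the forwarded RTP streams:

    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=Janus RTP forward
    c=IN IP4 127.0.0.1
    t=0 0
    m=audio 5002 RTP/AVP 111
    a=rtpmap:111 opus/48000/2
    m=video 5004 RTP/AVP 96
    a=rtpmap:96 H264/90000

    The ffmpeg command that reads it and pushes RTMP to a local nginx-rtmp endpoint:

    ffmpeg -protocol_whitelist file,udp,rtp -i rtp.sdp \
        -c:v libx264 -preset veryfast -c:a aac \
        -f flv rtmp://localhost/live/stream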

  • WebVTT Discussions at FOMS

    1 January 2014, by silvia

    At the recent FOMS (Foundations of Open Media Software and Standards) Developer Workshop, we had a massive focus on WebVTT and the state of its feature set. You will find links to summaries of the individual discussions on the FOMS Schedule page. Here are some of the key results I went away with.

    1. WebVTT Regions

    The key driving force for improvements to WebVTT continues to be the accurate representation of CEA608/708 captioning. As part of that drive, we’ve introduced regions (the CEA708 “window” concept) to WebVTT. WebVTT regions satisfy multiple requirements of CEA608/708 captions:

    1. support for rollup captions
    2. support for background color and border color on a group of cues independent of the background color of the individual cue
    3. possibility to move a group of cues from one location on screen to a different one
    4. support to specify an anchor point and a growth direction for cues when their text size changes
    5. support for specifying a fixed number of lines to be rendered
    6. possibility to specify which region is rendered in front of which other one when regions overlap

    While WebVTT regions enable us to satisfy all of the above points, the specification isn’t actually complete yet, and some of the above needs remain unsatisfied.

    We have an open bug to move a region elsewhere. A first discussion at FOMS seemed to indicate that we’ll have to add syntax for updating a region at a particular time and thus give region definitions a way to be valid only for a certain time frame. I can imagine that the region definitions that we have in the header of the WebVTT file now would have an implicitly defined time frame from the start to the end of the file, but could be overruled by a re-definition anywhere within the WebVTT file. That redefinition needs to provide a start and an end time.

    We registered a bug to allow specifying the width and height of regions (and possibly of cues) in em (i.e. in multiples of the largest character in a font). This should allow us to have the region grow/shrink around the region anchor point with a change of font size by script or a user. em specifications should also be applied to cues – that matches the column count of CEA708/608 better.

    When regions overlap, the original region extension spec already suggested a “layer” cue setting. It will be easy to add it.

    Another change that we will ultimately need is the “scroll” setting: we will need to introduce support for scrolling text down, or from left-to-right or right-to-left; e.g. vertically scrolling text seems to be used in some Chinese caption use cases.
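
    As a concrete illustration of a region and a cue assigned to it, here is a minimal sketch using the region syntax as it ended up in the current WebVTT draft (which differs from the syntax discussed at the time; the identifier and values are illustrative):

    WEBVTT

    REGION
    id:speaker1
    width:40%
    lines:3
    regionanchor:0%,100%
    viewportanchor:10%,90%
    scroll:up

    00:00:00.000 --> 00:00:04.000 region:speaker1
    Rollup-style captions scroll up within this region.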

    2. Unify Rendering Approach

    The introduction of regions created a second code path in the rendering spec with some duplication. At FOMS we discussed whether it was possible to unify that. The suggestion is to render all cues into a region. Those that are not part of a region would be rendered into an anonymous region that covers the complete viewport. There may be some consequences to this, e.g. cue settings should be usable across all cues, no matter whether or not they are part of a region, and avoiding cue overlap may need to be done within regions.

    Here’s a rough outline of the path of the new rendering algorithm:

    (1) Render the regions:

    For a specified region, render the values as given:

    • width
    • lines
    • regionanchor
    • viewportanchor
    • scroll

    For the anonymous region, render the following default values:

    • width: 100%
    • lines: videoheight/lineheight
    • regionanchor: 0,0
    • viewportanchor: 0,0
    • scroll: none

    (2) Render the cues:

    • Create a cue box and put it in its region (anonymous if none given).
    • Calculate position & size of cue box from cue settings (position, line, size).
    • Calculate position of cue text inside cue box from remaining cue settings (vertical, align).

    3. Vertical Features

    WebVTT includes vertical rendering, both right-to-left and left-to-right. However, regions are not defined for vertical rendering. Eventually, we’re going to have to look at the vertical features of WebVTT in more detail and figure out whether the spec is working for them and what real-world requirements we have missed. We hope we can get some help from users in countries where vertically rendered captions/subtitles are the norm.

    4. Best Practices

    Some of the WebVTT users at FOMS suggested it would be advantageous to start a list of “best practices” for how to author captions with WebVTT. Example recommendations are:

    • Use line numbers only to position cues from the top or bottom of the viewport; don’t use them otherwise (see the sketch below).
    • Note that when the user increases the font size in rollup captions and thus introduces new line breaks, your cues will roll by faster, because the number of lines of a rollup is fixed.
    • Make sure to use &lrm; and &rlm; UTF-8 markers to control the directionality of your text.

    It would be nice if somebody started such a document.
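
    As a sketch of the first recommendation above: integer line numbers position a cue counting from the top of the viewport, negative ones from the bottom (times and text are illustrative):

    WEBVTT

    00:00:00.000 --> 00:00:04.000 line:0
    This cue is pinned to the top of the viewport.

    00:00:04.000 --> 00:00:08.000 line:-1
    This cue is pinned to the bottom of the viewport.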

    5. Non-caption use cases

    Instead of continuing to look back and improve our support of captions/subtitles in WebVTT, one session at FOMS also went ahead and looked forward to other use cases. The following requirements came out of this:

    5.1 Preview Thumbnails

    A common use case for timed data is the use of preview thumbnails on the navigation bar of videos. A native implementation of preview thumbnails would allow crawlers and search engines to have a standardised way of extracting timed images for media files, so the introduction of a new @kind value “thumbnails” was suggested.

    The content of a “thumbnails” cue could be any of:

    • an image URL
    • a sprite URL to a single image
    • a spatial & temporal media fragment URL to a media resource
    • a base64-encoded image (data URI)
    • an iframe offset to the media resource

    The suggestion is to allow anything that would work in an img @src attribute as the value in a cue of @kind=”thumbnails”. Responsive images might also be useful for a track of @kind=”thumbnails”. It may even be possible to define an inband thumbnail track based on the track of @kind=”thumbnails”. Such cues should also work in the JavaScript track API.
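
    Since @kind=”thumbnails” is only a suggestion at this point, here is a sketch of the workaround commonly used today: a metadata track whose cue payloads are spatial media fragment URLs into a sprite image, interpreted by the player rather than the browser (file names and pixel offsets are illustrative):

    <video src="video.mp4" controls>
      <track kind="metadata" label="thumbnails" src="thumbs.vtt" default>
    </video>

    thumbs.vtt:

    WEBVTT

    00:00:00.000 --> 00:00:10.000
    sprite.jpg#xywh=0,0,160,90

    00:00:10.000 --> 00:00:20.000
    sprite.jpg#xywh=160,0,160,90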

    5.2 Chapter markers

    There is interest to put richer content than just a chapter title into chapter cues. Often, chapters consist of a title, text, and an image. The text is not so important, but the image is used almost everywhere that chapters are used. There may be a need to extend chapter cue content with images, similar to what a @kind=”thumbnails” track offers.

    The conclusion that we arrived at was that we need to make @kind=”thumbnails” work first and then look at using the learnings from that to extend @kind=”chapters”.
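
    For comparison, a plain chapter track as it already works today, with one title-only cue per chapter (times and titles are illustrative):

    <track kind="chapters" label="Chapters" src="chapters.vtt" srclang="en">

    chapters.vtt:

    WEBVTT

    00:00:00.000 --> 00:04:30.000
    Introduction

    00:04:30.000 --> 00:12:00.000
    Getting started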

    5.3 Inband tracks for live video

    A difficult topic was opened with the question of how to transport text tracks in live video. In live captioning, end times are never created for cues, but are implied by the start time of the next cue. This is a use case that hasn’t been addressed in HTML5/WebVTT yet. An old proposal to allow a special end time value of “NEXT” was discussed and recommended for adoption. Also, there was support for the spec change that stops blocking loading VTT until all cues have been loaded.

    5.4 Cross-domain VTT loading

    A brief discussion centered around the fact that the spec disallows cross-domain loading of WebVTT files, but that no browser implements this restriction. This needs to be discussed at the HTML WG level.

    6. Regions in live captioning

    The final topic that we discussed was how we could provide support for regions in live captioning.

    • The currently active region definitions will need to become part of the header of every VTT file segment that HLS uses, so they are available in case the cues in the segment file reference them.
    • “NEXT” in end time markers would make authoring of live captioned VTT files easier.
    • If the application wants to send one word at a time and doesn’t want to delay sending a word until the full cue is authored (e.g. in a Hangout-type environment), we will need to introduce the concept of “cue continuation markers”, so we know that a cue could be extended with the next VTT file fragment.

    This is an extensive and impressive amount of discussion around WebVTT, and a lot of new work to be performed in the future. I’m very grateful to all the people who have contributed to these discussions at FOMS and will hopefully continue to help get the specifications right.