Advanced search

Media (0)

Keyword: - Tags - /xmlrpc

No media matching your criteria is available on the site.

Other articles (66)

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player MediaSPIP uses was created specifically for MediaSPIP and can easily be adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • Support for all types of media

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (open office, microsoft office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)

On other sites (10808)

  • Neutral net or neutered

    4 June 2013, by Mans — Law and liberty

    In recent weeks, a number of high-profile events, in the UK and elsewhere, have been quickly seized upon to promote a variety of schemes for monitoring or filtering Internet access. These proposals, despite their good intentions of protecting children or fighting terrorism, pose a serious threat to fundamental liberties. Although at a glance the ideas may seem like a reasonable price to pay for the prevention of some truly hideous crimes, there is more to them than first meets the eye. Internet regulation in any form whatsoever is the thin end of a wedge at whose other end we find severely restricted freedom of expression of the kind usually associated with oppressive dictatorships. Where the Internet was once a novelty, it now forms an integrated part of modern society; regulating the Internet means regulating our lives.

    Terrorism

    Following the brutal murder of British soldier Lee Rigby in Woolwich, attempts were made in the UK to revive the controversial Communications Data Bill, also dubbed the snooper’s charter. The bill would give police and security services unfettered access to details (excluding content) of all digital communication in the UK without needing so much as a warrant.

    The powers afforded by the snooper’s charter would, the argument goes, enable police to prevent crimes such as the one witnessed in Woolwich. True or not, the proposal would, if implemented, also bring about infrastructure for snooping on anyone at any time for any purpose. Once available, the temptation may become strong to extend, little by little, the legal use of these abilities to cover ever more everyday activities, all in the name of crime prevention, of course.

    In the emotional aftermath of a gruesome act, anything with the promise of preventing it happening again may seem like a good idea. At times like these it is important, more than ever, to remain rational and carefully consider all the potential consequences of legislation, not only the intended ones.

    Hate speech

    Hand in hand with terrorism goes hate speech, preachings designed to inspire violence against people of some singled-out nation, race, or other group. Naturally, hate speech is often to be found on the Internet, where it can reach large audiences while the author remains relatively protected. Naturally, we would prefer for it not to exist.

    To fulfil the utopian desire of a clean Internet, some advocate mandatory filtering by Internet service providers and search engines to remove this unwanted content. Exactly how such censoring might be implemented is however rarely dwelt upon, much less the consequences inadvertent blocking of innocent material might have.

    Pornography

    Another common target of calls for filtering is pornography. While few object to the blocking of child pornography, at least in principle, the debate runs hotter when it comes to the legal variety. Pornography, it is claimed, promotes violence towards women and is immoral or generally offensive. As such it ought to be blocked in the name of the greater good.

    The conviction last week of paedophile Mark Bridger for the abduction and murder of five-year-old April Jones renewed the debate about filtering of pornography in the UK; his laptop was found to contain child pornography. John Carr of the UK government’s Council on Child Internet Safety went so far as suggesting a default blocking of all pornography, access being granted to an Internet user only once he or she had registered with some unspecified entity. Registering people wishing only to access perfectly legal material is not something we do in a democracy.

    The reality is that Google and other major search engines already remove illegal images from search results and report them to the appropriate authorities. In the UK, the Internet Watch Foundation, a non-government organisation, maintains a blacklist of what it deems ‘potentially criminal’ content, and many Internet service providers block access based on this list.

    While well-intentioned, the IWF and its blacklist should raise some concerns. Firstly, a vigilante organisation operating in secret and with no government oversight acting as the nation’s morality police has serious implications for freedom of speech. Secondly, the blocks imposed are sometimes more far-reaching than intended. In one incident, an attempt to block the cover image of the Scorpions album Virgin Killer hosted by Wikipedia (in itself a dubious decision) rendered the entire related article inaccessible and interfered with editing.

    Net neutrality

    Content filtering, or more precisely the lack thereof, is central to the concept of net neutrality. Usually discussed in the context of Internet service providers, this is the principle that the user should have equal, unfiltered access to all content. As a consequence, ISPs should not be held responsible for the content they deliver. Compare this to how the postal system works.

    The current debate shows that the principle of net neutrality is important not only at the ISP level, but should also include providers of essential services on the Internet. This means search engines should not be responsible for or be required to filter results, email hosts should not be required to scan users’ messages, and so on. No mandatory censoring can be effective without infringing the essential liberties of freedom of speech and press.

    Social networks operate in a less well-defined space. They are clearly not part of the essential Internet infrastructure, and they require that users sign up and agree to their terms and conditions. Because of this, they can include restrictions that would be unacceptable for the Internet as a whole. At the same time, social networks are growing in importance as means of communication between people, and as such they have a moral obligation to act fairly and apply their rules in a transparent manner.

    Facebook was recently under fire, accused of not taking sufficient measures to curb ‘hate speech,’ particularly against women. Eventually they pledged to review their policies and methods, and reducing the proliferation of such content will surely make the web a better place. Nevertheless, one must ask how Facebook (or another social network) might react to similar pressure from, say, a religious group demanding removal of ‘blasphemous’ content. What about demands from a foreign government? Only yesterday, the Turkish prime minister Erdogan branded Twitter ‘a plague’ in a TV interview.

    Rather than impose upon Internet companies the burden of law enforcement, we should provide them the latitude to set their own policies as well as the legal confidence to stand firm in the face of unreasonable demands. The usual market forces will promote those acting responsibly.


  • Transcoding Modern Formats

    17 August 2014

    I’ve noticed that this blog still gets a decent amount of traffic, particularly to some of the older articles about transcoding. Since I’ve been working on a tool in this space recently, I thought I’d write something up in case it helps folks unravel how to think about transcoding these days.

    The tool I’ve been working on is EditReady, a transcoding app for the Mac. But why do you want to transcode in the first place?

    Dailies

    After a day of shooting, there are a lot of people who need to see the footage from the day. Most of these folks aren’t equipped with editing suites or viewing stations - they want to view footage on their desktop or mobile device. That can be a problem if you’re shooting ProRes or similar.

    Converting ProRes, DNxHD or MPEG2 footage with EditReady to H.264 is fast and easy. With bulk metadata editing and custom file naming, the management of all the files from the set becomes simpler and more trackable.

    One common workflow would be to drop all the footage from a given shot into EditReady. Use the "set metadata for all" command to attach a consistent reel name to all of the clips. Do some quick spot-checks on the footage using the built-in player to make sure it’s what you expect. Use the filename builder to tag all the footage with the reel name and the file creation date. Then, select the H.264 preset and hit convert. Now anyone who needs the footage can easily take the proxies with them on the go, without needing special codecs or players, and regardless of whether they’re working on a PC, a Mac, or even a mobile device.

    If your production is being shot in the Log space, you can use the LUT feature in EditReady to give your viewers a more traditional "video levels" daily. Just load a basic Log to Video Levels LUT for the batch, and your converted files will more closely resemble graded footage.

    Mezzanine Formats

    Even though many modern post production tools can work natively with H.264 from a GoPro or iPhone, there are a variety of downsides to that type of workflow. First and foremost is performance. When you’re working with H.264 in an editor or color correction tool, your computer has to constantly work to decompress the H.264 footage. Those are CPU cycles that aren’t being spent generating effects, responding to user interface clicks, or drawing your previews. Even apps that endeavor to support H.264 natively often get bogged down, or have trouble with all of the "flavors" of H.264 that are in use. For example, mixing and matching H.264 from a GoPro with H.264 from a mobile phone often leads to hiccups or instability.

    By using EditReady to batch transcode all of your footage to a format like ProRes or DNxHD, you get great performance throughout your post production pipeline, and more importantly, you get consistent performance. Since you’ll generally be exporting these formats from other parts of your pipeline as well - getting ProRes effects shots for example - you don’t have to worry about mix-and-match problems cropping up late in the production process either.

    Just like with dailies, the ability to apply bulk or custom metadata to your footage during your initial ingest also makes management easier for the rest of your production. It also makes your final output faster - transcoding from H.264 to another format is generally slower than transcoding from a mezzanine format. Nothing takes the fun out of finishing a project like watching an "exporting" bar endlessly creep along.

    Modernization

    The video industry has gone through a lot of digital formats over the last 20 years. As Mac OS X has been upgraded over the years, it’s gotten harder to play some of those old formats. There’s a lot of irreplaceable footage stored in formats like Sorensen Video, Apple Intermediate Codec, or Apple Animation. It’s important that this footage be moved to a modern format like ProRes or H.264 before it becomes totally unplayable by modern computers. Because EditReady contains a robust, flexible backend with legacy support, you can bring this footage in, select a modern format, and click convert. Back when I started this blog, we were mostly talking about DV and HDV, with a bit of Apple Intermediate Codec mixed in. If you’ve still got footage like that around, it’s time to bring it forward !

    Output

    Finally, the powerful H.264 transcoding pipeline in EditReady means you can generate beautiful, deliverable H.264 more rapidly than ever. Just drop in your final, edited ProRes, DNxHD, or even uncompressed footage and generate a high-quality H.264 file for delivery. It’s never been this easy!

    See for yourself

    We released a free trial of EditReady so you can give it a shot yourself. Or drop me a line if you have questions.

  • First/single frame encoded with ffmpeg/libavcodec library cannot be immediately decoded

    11 September 2023, by Marek Kijo

    I'm using the libavcodec library and the h264 codec to prepare a video stream on one end, transmit the encoded frames to another PC, and decode it there.

    What I noticed is that after receiving the very first packet (the first encoded video frame) and feeding the decoder with it, it is not possible to decode that frame. Only when I receive another frame can the first one be decoded, but the 'current' one cannot. So in the end I constantly have a one-frame delay on the decoder side.

    I tried different presets (focusing on 'ultrafast'), the 'zerolatency' tune, and a whole variety of bit_rate values on the AVCodecContext.

    I also tried to flush (with a nullptr packet) after injecting the first frame's data, just to check whether some internal buffer optimization was responsible - the frame was still not decoded. Experimenting with other codecs (like mpeg4) gives an even worse delay, in terms of the number of frames, before the first frames become decodable.
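
    For reference, a full drain with a null packet looks roughly like this with the send/receive API (a minimal sketch using the same context_/frame_ members as in the code below; note that a null packet puts the decoder into draining mode, so this ends the decoding session rather than removing the per-frame delay):

avcodec_send_packet(context_, nullptr); // nullptr enters draining mode
while (avcodec_receive_frame(context_, frame_) >= 0)
{
  // ... consume the remaining buffered frames ...
}
avcodec_flush_buffers(context_); // reset before sending new packets again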

    Is this normal and unavoidable because of some internal mechanism? If not, how can I achieve true zero latency?

    Supplementary setup information:

    • max_b_frames set to 0 (a higher value gives even more delay)
    • pix_fmt set to AV_PIX_FMT_YUV420P

    Edit:

    Answering some questions from the comments:

    (1) What is the decoder (or playback system)?

    A custom decoder written using libavcodec; the decoded frames are later displayed on screen via OpenGL.

    • initialization:

parser_ = av_parser_init(AV_CODEC_ID_H264);
codec_ = avcodec_find_decoder(AV_CODEC_ID_H264);
context_ = avcodec_alloc_context3(codec_);
context_->width = 1024;
context_->height = 768;
context_->thread_count = 1;
// enable truncated-input handling only when the codec supports it
// (this capability/flag pair is deprecated in recent FFmpeg releases)
if ((codec_->capabilities & AV_CODEC_CAP_TRUNCATED) != 0)
{
  context_->flags |= AV_CODEC_FLAG_TRUNCATED;
}
if (avcodec_open2(context_, codec_, nullptr) < 0)
{
  throw std::runtime_error{"avcodec_open2 failed"};
}
avcodec_flush_buffers(context_);


    • then the player periodically calls a method of the decoder that is supposed to check whether another frame can be retrieved and displayed:

auto result = avcodec_receive_frame(context_, frame_);
if (!buffer_.empty())
{  // upload another packet for decoding
  int used;
  if (upload_package(buffer_.data(), buffer_.size(), &used))
  {
    buffer_.erase(buffer_.begin(), buffer_.begin() + used);
  }
}
if (result == AVERROR(EAGAIN) || result == AVERROR_EOF || result < 0)
{
  return false;
}
yuv_to_rgb();
return true;


    The boolean return value indicates whether decoding succeeded; on every call, the buffer where the incoming packets are stored is checked and its contents are uploaded to the libavcodec decoder.
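
    For reference, the send/receive API is designed to be driven as a pair: feed input with avcodec_send_packet until it reports AVERROR(EAGAIN), and pull output with avcodec_receive_frame until it reports AVERROR(EAGAIN). A minimal sketch of that pattern, using the same members as above:

// Canonical decode pattern (sketch): send one packet, then collect
// every frame the decoder is ready to hand back.
if (avcodec_send_packet(context_, packet_) >= 0)
{
  while (avcodec_receive_frame(context_, frame_) >= 0)
  {
    // ... consume frame_ ...
  }
}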

    • and this is what the method that uploads the buffer looks like:

bool upload_package(const void* data, const std::size_t size, int* used)
{
  auto result = av_parser_parse2(parser_, context_, &packet_->data, &packet_->size,
                                 reinterpret_cast<const uint8_t*>(data), size,
                                 AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
  if (result < 0)
  {
    return false;
  }
  *used = result;
  if (packet_->size != 0)
  {
    result = avcodec_send_packet(context_, packet_);
    if (result < 0)
    {
      return false;
    }
  }
  return true;
}


    (2) If possible, save each one as a .bin file and then share the links with us for testing.


    I will try to figure something out...


    (3) Show example C++ code of your encoder settings for H264...

    • initialization:

codec_ = avcodec_find_encoder(AV_CODEC_ID_H264);
context_ = avcodec_alloc_context3(codec_);

context_->bit_rate = 1048576; // 1 Mbit
context_->width = 1024;
context_->height = 768;
context_->time_base = {1, 30}; // 30 fps
context_->pix_fmt = AV_PIX_FMT_YUV420P;
context_->thread_count = 1;

av_opt_set(context_->priv_data, "preset", "ultrafast", 0);
av_opt_set(context_->priv_data, "tune", "zerolatency", 0);
avcodec_open2(context_, codec_, nullptr);

frame_->format = AV_PIX_FMT_YUV420P;
frame_->width = 1024;
frame_->height = 768;
av_image_alloc(frame_->data, frame_->linesize, 1024, 768, AV_PIX_FMT_YUV420P, 32);

    • frame encoding:

rgb_to_yuv();
frame_->pts = frame_num_++;
auto result = avcodec_send_frame(context_, frame_);
while (result >= 0)
{
  result = avcodec_receive_packet(context_, packet_);
  if (result == AVERROR(EAGAIN) || result == AVERROR_EOF)
  {
    return;
  }
  else if (result < 0)
  {
    throw std::runtime_error{"avcodec_receive_packet failed"};
  }
  // the packet is sent to the decoder here: the whole packet is stored
  // in the buffer_ mentioned before and uploaded with avcodec_send_packet
  // (the whole buffer/packet is uploaded at once)
  stream_video_data(packet_->data, packet_->size);
}
av_packet_unref(packet_);

    Edit 2:

    I think I figured out the issue I had.

    For every incoming data packet (encoded frame) I was first calling av_parser_parse2 and then sending the parsed data through avcodec_send_packet, and I was not calling that procedure again once buffer_ was empty. So the first frame's data went into the parser but was only emitted (and sent to the decoder) on the next call, i.e. when the second frame's data arrived; the second frame's data in turn stayed in the parser until the third frame, and so on...

    So the issue in my case was the wrong sequencing of av_parser_parse2 and avcodec_send_packet when handling the encoded data.
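
    Driving the parser until it has consumed all buffered bytes fixes this: every complete packet the parser emits is sent to the decoder as soon as it appears, instead of waiting for the next frame's data to arrive. As a sketch (drain_buffer is a hypothetical helper built on the upload_package method above):

// Hypothetical drain loop: keep feeding the parser until the whole
// buffer has been consumed; upload_package forwards each complete
// packet to avcodec_send_packet as soon as the parser emits it.
bool drain_buffer()
{
  while (!buffer_.empty())
  {
    int used;
    if (!upload_package(buffer_.data(), buffer_.size(), &used))
    {
      return false;
    }
    buffer_.erase(buffer_.begin(), buffer_.begin() + used);
  }
  return true;
}

    Since each network message here already carries exactly one encoded frame, it may also be worth setting parser_->flags |= PARSER_FLAG_COMPLETE_FRAMES after av_parser_init, which tells the parser that its input is already split into complete frames.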
