
Other articles (33)
-
What is an editorial
21 June 2013 — Write your point of view in an article. It will be filed in a section set aside for this purpose.
An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. Only one editorial is featured on the home page at a time; to read the previous ones, browse the dedicated section.
You can customise the form used to create an editorial.
Editorial creation form: in the case of an editorial-type document, the (...) -
Contribute to translation
13 April 2011 — You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which allows it to spread to new linguistic communities.
To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...) -
MediaSPIP in private mode (Intranet)
17 September 2013 — As of version 0.3, a MediaSPIP channel can be made private, blocked to anyone not logged in, thanks to the "Intranet/extranet" plugin.
When activated, the Intranet/extranet plugin blocks access to the channel for any unidentified visitor, preventing them from reaching the content by systematically redirecting them to the login form.
This system can be particularly useful for certain uses, such as: a workshop with children whose content must not (...)
On other sites (3320)
-
My SBC Collection
31 December 2023, by Multimedia Mike — General
Like many computer nerds in the last decade, I have accumulated more than a few single-board computers, or “SBCs”, which are small computers based around a system-on-a-chip (SoC) that nearly always features an ARM CPU at its core. Surprisingly few of these units are Raspberry Pi units, though that brand has come to exemplify and dominate the product category.
Also, as is the case for many computer nerds, most of these SBCs lay fallow for years at a time. Equipped with an inexpensive lightbox that I procured in the last year, I decided I could at least create glamour shots of various units and catalog them in a blog post.
While Raspberry Pi still enjoys far and away the most mindshare, and while I do have a few Raspberry Pi units in my inventory, I have always been a bigger fan of the ODROID brand, which works with convenient importers around the world (in the USA, I can vouch for Ameridroid, to whom I’ve forked over a fair amount of cash for these computing toys).
As mentioned, Raspberry Pi undisputedly has the most mindshare of all these SBC brands and I often wonder why… and then I immediately remind myself that it has the biggest ecosystem, and has a variety of turnkey projects and applications (such as Pi-hole and PiVPN) that promise a lower barrier to entry — as well as a slightly lower price point — than some of these other options. ODROID had a decent ecosystem for a while, especially considering the monthly ODROID Magazine, though that ceased publication in July 2020. The Raspberry Pi and its variants were famously difficult to come by due to the global chip shortage from 2021 to 2023. Meanwhile, I had no trouble procuring these boards during the same timeframe.
So let’s delve into the collection…
Cubieboard
The Raspberry Pi came out in 2012 and by 2013 I was somewhat coveting one to hack on. Finally! An accessible ARM platform to play with. I had heard of the BeagleBoard for years but never tried to get my hands on one. I was thinking about taking the plunge on a new Raspberry Pi, but a colleague told me I should skip that and go with this new hotness called the Cubieboard, based on an Allwinner SoC. The big value-add that this board had vs. a Raspberry Pi was that it had a SATA adapter. Although now that it has been a decade, it only now occurs to me to ponder whether it was true SATA or a USB-to-SATA bridge. Looking it up now, I’m led to believe that the SoC supported the functionality natively.
Anyway, I did get it up and running but never did much with it, thus setting the tone for future SBC endeavors. No photos because I gave it to another tech enthusiast years ago, whose SBC collection dwarfs my own.
ODROID-XU4
I can’t recall exactly when or how I first encountered the ODROID brand. I probably read about it on some enthusiast page or another circa 2014 and decided to try one out. I eventually acquired a total of 3 of these ODROID-XU4 units, each with a different case, 1 with a fan and 2 passively-cooled:
This is based on the Samsung Exynos 5422 SoC, the same series as was used in their Note 3 phone released in 2013. It has been a fun chip to play with. The XU4 was also my first introduction to the eMMC storage solution that is commonly supported on the ODROID SBCs (alongside micro-SD). eMMC offers many benefits over SD in terms of read/write speed as well as longevity/write cycles. That’s getting less relevant these days, however, as more and more SBCs are being released with direct NVMe SSD support.
I had initially wanted to make a retro-gaming device built on this platform (see the handheld section later for more meditations on that). In support of this common hobbyist goal, there is this nifty XU4 case which apes the aesthetic of the Nintendo N64:
It even has a cool programmable LCD screen. Maybe one day I’ll find a use for it.
For a while, one of these XU4 units (likely the noisy, fan-cooled one) was contributing results to the FFmpeg FATE system.
While it features gigabit ethernet and a USB3 port, I once tried to see if I could get 2 Gbps throughput with the unit using a USB3-gigabit dongle. I had curious results in that the total amount of traffic throughput could never exceed 1 Gbps across both interfaces. I.e., if 1 interface was dealing with 1 Gbps and the other interface tried to run at 1 Gbps, they would both only run at 500 Mbps. That remains a mystery to me since I don’t see that limitation with Intel chips.
Still, the XU4 has been useful for a variety of projects and prototyping over the years.
ODROID-HC2 NAS
I find that a lot of my fellow nerds massively overengineer their homelab NAS setups. I’ll explore this in a future post. For my part, people tend to find my homelab NAS solution slightly underengineered. This is the ODROID-HC2 (the “HC” stands for “Home Cloud”):
It has the same guts as the ODROID-XU4 except no video output, and the USB3 function is leveraged for a SATA bridge. This allows you to plug a SATA hard drive directly into the unit:
Believe it or not, this has been my home NAS solution for something like 6 or 7 years now– I don’t clearly remember when I purchased it and put it into service.
But isn’t this sort of irresponsible? What about a failure of the main drive? That’s why I have an external drive connected for backing up the most important data via rsync:
The power consumption can’t be beat – profiling a few weeks of average usage worked out to 4.5 kWh for the ODROID-HC2… per month. That is an average draw of just over 6 watts (4500 Wh over roughly 720 hours ≈ 6.25 W).
ODROID-C2
I was on a kick of ordering more SBCs at one point. This is the ODROID-C2, equipped with a 64-bit Amlogic SoC:
I had this on the FATE farm for a while, performing 64-bit ARM builds (vs. the XU4’s 32-bit builds). As memory serves, it was unreliable and would occasionally freeze up.
Here is a view of the eMMC storage through the bottom of the translucent case:
ODROID-N2+
Out of all my ODROID SBCs, this is the unit that I long to “get back to” the most – the ODROID-N2+:
It’s a very capable unit that makes a great little desktop. I have some projects I want to develop using it so that it will force me to have a focused development environment.
Raspberry Pi
Eventually, I did break down and get a Raspberry Pi. I had a specific purpose in mind and, much to my surprise, I have stuck to it:
I was using one of the ODROID-XU4 units as a VPN gateway. Eventually, I wanted to convert the XU4 to something else and I decided to run the VPN gateway as an appliance on the simplest device I could. So I procured this complete hand-me-down unit from eBay and went to work. This was also the first time I discovered the DietPi distribution, and this box has been in service running WireGuard via PiVPN for many years.
I also have a Raspberry Pi 3B+ kicking around somewhere. I used it as a Steam Link device for a while.
SOPINE + Baseboard
Also procured when I was on this “let’s buy random SBCs” kick. The Pine64 SOPINE is actually a compute module that comes in the form factor of a memory module.
Back to using Allwinner SoCs. In order to make this thing useful, you need to place it in something. It’s possible to get a mini-ITX form factor board that can accommodate 7 of these modules. Before going to that extreme, there is this much simpler baseboard which can also use eMMC for storage.
I really need to find an appropriate case for this one as it currently performs its duty while sitting on an anti-static bag.
NanoPi NEO3
I enjoy running the DietPi distribution on many of these SBCs (as it’s developed not just for Raspberry Pi). I have also found their website to be a useful resource for discovering new SBCs. That’s how I found the NanoPi series and zeroed in on this NEO3 unit, sporting a Rockchip SoC, and photographed here with some American currency in order to illustrate its relative size:
I often forget about this computer because it’s off in another room, just quietly performing its assigned duty.
MangoPi MQ-Pro
So far, I’ve heard of these fruits prepending the Greek letter pi for naming small computing products:
- Raspberry – the O.G.
- Banana – seems to be popular for hobbyist router/switches
- Orange
- Atomic
- Nano
- Mango
Okay, so the AtomicPi and NanoPi names don’t really make sense considering the fruit convention.
Anyway, the newest entry is the MangoPi. These showed up on Ameridroid a few months ago. There are 2 variants: the MQ-Pro and the MQ-Quad. I picked one and rolled with it.
When it arrived, I unpacked it, assembled the pieces, downloaded a distro, tossed that on a micro-SD card, connected a monitor and keyboard to it via its USB-C port, got the distro up and running, configured the wireless networking with a static IP address and installed sshd, and it was ready to go as a headless server for an edge application.
The unit came with no instructions that I can recall. After I got it set up, I remember thinking, “What is wrong with me? Why is it that I just know how to do all of this without any documentation?”
Only after I got it up and running and poked around a bit did I realize that this SBC doesn’t have an ARM SoC– it’s a RISC-V SoC. It uses the Allwinner D1, so it looks like I came full circle back to Allwinner.
So I now have my first piece of RISC-V hobbyist kit, although I learned recently from Kostya that it’s not that great for multimedia.
Handheld Gaming Units
The folks at Hardkernel have also produced a series of handheld retro-gaming devices called ODROID-GO. The first one resembled the original Nintendo Game Boy, came as a kit to be assembled, and emulated 5 classic consoles. It also had some hackability to it. Quite a cool little device, and inexpensive too. I have since passed it along to another gaming enthusiast.
Later came the ODROID-GO Advance, also a kit, but emulating more devices. I was extremely eager to get my hands on this since it could emulate SNES in addition to NES. It also features a headphone jack, unlike the earlier model. True to form, after I received mine, it took me about 13 months before I got around to assembling it. After that, the biggest challenge I had was trying to find an appropriate case for it.
Even though it may try to copy the general aesthetic and form factor of the Game Boy Advance, cases for the GBA don’t fit this correctly.
Further, Hardkernel have also released the ODROID-GO Super and Ultra models that do more and more. The Advance, Super, and Ultra models have powerful SoCs and feature much more hackability than the first ODROID-GO model.
I know that the guts of the Advance have been used in other products as well. The same is likely true for the Super and Ultra.
Ultimately, the ODROID-GO Advance was just another project I assembled and then set aside, since I like the idea of playing old games much more than actually doing it. Plus, the fact has finally crystallized in my mind over the past few years that I have never enjoyed handheld gaming and likely never will, even after I started wearing glasses. Not that I’m averse to old Game Boy / Color / Advance games, but if I’m going to play them, I’d rather emulate them on a large display.
The Future
In some of my weaker moments, I consider ordering up certain Banana Pi products (like the Banana Pi BPI-R2) with a case and doing my own router tricks using some open source router/firewall solution. And then I remind myself that my existing prosumer-type home router is doing just fine. But maybe one day…
-
Open Media Developers Track at OVC 2011
11 October 2011, by silvia
The Open Video Conference that took place on 10-12 September was so overwhelming, I’ve still not been able to catch my breath! It was a dense three days for me, even though I only focused on the technology sessions of the conference and utterly missed out on all the policy and content discussions.
Roughly 60 people participated in the Open Media Software (OMS) developers track. This was an amazing group of people capable and willing to shape the future of video technology on the Web:
- HTML5 video developers from Apple, Google, Opera, and Mozilla (though we missed the NZ folks),
- codec developers from WebM, Xiph, and MPEG,
- Web video developers from YouTube, JWPlayer, Kaltura, VideoJS, PopcornJS, etc.,
- content publishers from Wikipedia, Internet Archive, YouTube, Netflix, etc.,
- open source tool developers from FFmpeg, gstreamer, flumotion, VideoLAN, PiTiVi, etc.,
- and many more.
To provide a summary of all the discussions would be impossible, so I just want to share the key take-aways that I had from the main sessions.
WebRTC: Realtime Communications and HTML5
Tim Terriberry (Mozilla), Serge Lachapelle (Google) and Ethan Hugg (Cisco) moderated this session together (slides). There are activities both at the W3C and at the IETF – the ones at the IETF are supposed to focus on protocols, while the W3C ones focus on HTML5 extensions.
The current proposal of a PeerConnection API has been implemented in WebKit/Chrome as open source. It is expected that Firefox will have an add-on by Q1 next year. It enables video conferencing, including media capture, media encoding, signal processing (echo cancellation etc), secure transmission, and a data stream exchange.
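To give a concrete shape to this, here is a minimal sketch of setting up a call – written against today’s unprefixed RTCPeerConnection API rather than the 2011 draft discussed in the session (whose names differed), with the signalling channel left as a hypothetical stand-in:

// Sketch: capture local media and produce an SDP offer with RTCPeerConnection.
// sendToPeer() is a made-up placeholder for the application's signalling channel.
async function startCall(sendToPeer: (sdp: string) => void): Promise<void> {
  const pc = new RTCPeerConnection();
  // Media capture: microphone and camera.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach(track => pc.addTrack(track, stream));
  // Encoding, echo cancellation and secure transport happen inside the browser;
  // the page only negotiates session descriptions.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer(offer.sdp ?? '');
}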
Current discussions are around the signalling protocol and whether SIP needs to be required by the standard. Further, the codec question is under discussion, with the question of whether to mandate VP8 and Opus, since transcoding gateways are not desirable. Another question is how to measure the quality of the connection and how to report errors so as to allow adaptation.
What always amazes me around RTC is the sheer number of specialised protocols that seem to be required to implement this. WebRTC does not disappoint: in fact, the question was asked whether there could be a lighter alternative than to re-use dozens of years of protocol development – is it over-engineered? Can desktop players connect to a WebRTC session?
We are already in a second or third revision of this part of the HTML5 specification and yet it seems the requirements are still being collected. I’m quietly confident that everything is done to make the lives of the Web developer easier, but it sure looks like a huge task.
The Missing Link: Flash to HTML5
Zohar Babin (Kaltura) and myself moderated this session and I must admit that this session was the biggest eye-opener for me amongst all the sessions. There was a large number of Flash developers present in the room and that was great, because sometimes we just don’t listen enough to lessons learnt in the past.
This session gave me one of those aha-moments, in the form of the Flash appendBytes() API function.
The appendBytes() function allows a Flash developer to take a byteArray out of a connected video resource and do something with it – such as feed it to a video for display. When I heard that Web developers want that functionality for JavaScript and the video element, too, I instinctively rejected the idea, wondering why on earth a Web developer would want to touch encoded video bytes – why not leave that to the browser?
But as it turns out, this is actually a really powerful enabler of functionality. For example, you can use it to:
- display mid-roll video ads as part of the same video element,
- sequence playlists of videos into the same video element,
- implement DVR functionality (high-speed seeking),
- do mash-ups,
- do video editing,
- do adaptive streaming.
This totally blew my mind and I am now completely supportive of having such a function in HTML5. Together with media fragment URIs you could even leave all the header download management for resources to the Web browser and just request time ranges from a video through an appendBytes() function. This would be easier on the Web developer than having to deal with byte ranges and making sure that appropriate decoding pipelines are set up.
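To illustrate that last point, a temporal Media Fragment URI encodes the wanted time range directly in the URL fragment – e.g. video.webm#t=60,90 requests the 60–90 second range (the file name here is made up).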
Standards for Video Accessibility
Philip Jagenstedt (Opera) and myself moderated this session. We focused on the HTML5 track element and the WebVTT file format. Many issues were identified that will still require work.
One particular topic was to find a standard means of rendering the UI for caption, subtitle, and description selection. For example, what icons should be used to indicate that subtitles or captions are available. While this is not part of the HTML5 specification, it’s still important to get this right across browsers, since otherwise users will get confused by diverging interfaces.
Chaptering was discussed and a particular need to allow URLs to directly point at chapters was expressed. I suggested the use of named Media Fragment URLs.
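Such a named fragment would use the Media Fragments id dimension rather than a time range – e.g. video.webm#id=chapter-2, with a made-up chapter name.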
The use of WebVTT for descriptions for the blind was also discussed. A suggestion was made to use the voice tag <v> to allow for “styling” (i.e. selection) of the screen reader voice.
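For example, a description cue using the voice tag might look like this (a sketch – the timing, voice name and text are made up):

WEBVTT

00:01:15.000 --> 00:01:22.000
<v Description>A dimly lit hallway; the camera pans to the door.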
Finally, multitrack audio or video resources were also discussed and the @mediagroup attribute was explained. A question was asked about how to identify the language used in different alternative dubs. This is an issue because @srclang exists only on the track element, not on audio or video, so it’s a missing feature for the multitrack API.
Beyond this session, there was also a breakout session on WebVTT and the track element. As a consequence, a number of bugs were registered in the W3C bug tracker.
WebM: Testing, Metrics and New features
This session was moderated by John Luther and John Koleszar, both of the WebM Project. They started off with a presentation on current work on WebM, which includes quality testing and improvements, and encoder speed improvement. Then they moved on to questions about how to involve the community more.
The community criticised the scarcity of communication about what is happening around WebM. More sharing of information was requested, including a move to using open Google+ hangouts instead of Google-internal video conferences. More use of the public bug tracker can also help include the community better.
Another pain point of the community was that code is introduced and removed without much feedback. It was requested that a peer review process be introduced. It was also requested that example code snippets be published when new features are announced, so others can replicate the claims.
This all indicates to me that the WebM project is becoming more open, but that there is still a lot to learn.
Standards for HTTP Adaptive Streaming
This session was moderated by Frank Galligan and Aaron Colwell (Google), and Mark Watson (Netflix).
Mark started off by giving us an introduction to MPEG DASH, the MPEG file format for HTTP adaptive streaming. MPEG has just finalized the format and he was able to show us some examples. DASH is XML-based and thus rather verbose. It covers all eventualities of what parameters could be switched during transmission, which makes it very broad. These include trick modes, e.g. for fast forwarding, 3D, multi-view and multitrack content.
MPEG have defined profiles – one for live streaming which requires chunking of the files on the server, and one for on-demand which requires keyframe alignment of the files. There are clear specifications for how to do these with MPEG. Such profiles would need to be created for WebM and Ogg Theora, too, to make DASH universally applicable.
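For a flavour of that verbosity, a heavily stripped-down manifest might look roughly like this – an illustrative sketch only, not a conformant MPD; the IDs, bandwidths and duration are made up:

<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static" mediaPresentationDuration="PT120S">
  <Period>
    <AdaptationSet mimeType="video/mp4" segmentAlignment="true">
      <Representation id="720p" bandwidth="2000000" width="1280" height="720"/>
      <Representation id="360p" bandwidth="500000" width="640" height="360"/>
    </AdaptationSet>
  </Period>
</MPD>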
Further, the Web case needs a more restrictive adaptation approach, since the video element’s API is already accounting for some of the features that DASH provides for desktop applications. So, a Web-specific profile of DASH would be required.
Then Aaron introduced us to the MediaSource API and in particular the webkitSourceAppend() extension that he has been experimenting with. It is essentially an implementation of the appendBytes() function of Flash, which the Web developers had been asking for just a few sessions earlier. This was likely the biggest announcement of OVC, alas a quiet and technically-focused one.
Aaron explained that he had been trying to find a way to implement HTTP adaptive streaming into WebKit in a way in which it could be standardised. While doing so, he also came across other requirements around such chunked video handling, in particular around dynamic ad insertion, live streaming, DVR functionality (fast forward), constrained video editing, and mashups. While trying to sort out all these requirements, it became clear that it would be very difficult to implement strategies for stream switching, buffering and delivery of video chunks into the browser when so many different and likely contradictory requirements exist. Also, once an approach is implemented and specified for the browser, it becomes very difficult to innovate on it.
Instead, the easiest way to solve it right now and learn about what would be necessary to implement into the browser would be to actually allow Web developers to queue up a chunk of encoded video into a video element for decoding and display. Thus, the webkitSourceAppend() function was born (specification).
The proposed extension to the HTMLMediaElement is as follows:

partial interface HTMLMediaElement {
  // URL passed to src attribute to enable the media source logic.
  readonly attribute [URL] DOMString webkitMediaSourceURL;

  bool webkitSourceAppend(in Uint8Array data);

  // end of stream status codes.
  const unsigned short EOS_NO_ERROR = 0;
  const unsigned short EOS_NETWORK_ERR = 1;
  const unsigned short EOS_DECODE_ERR = 2;
  void webkitSourceEndOfStream(in unsigned short status);

  // states
  const unsigned short SOURCE_CLOSED = 0;
  const unsigned short SOURCE_OPEN = 1;
  const unsigned short SOURCE_ENDED = 2;
  readonly attribute unsigned short webkitSourceState;
};

The code is already checked into WebKit, but commented out behind a command-line compiler flag.
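To see how a page would drive this, here is a small sketch in the spirit of the session – everything except the interface members quoted above (the file name, chunk size, element lookup) is a made-up assumption, and this prefixed API has of course long since evolved into the Media Source Extensions:

// Sketch: feed one encoded WebM chunk to a media element via the
// experimental webkitSourceAppend() interface quoted above.
const video = document.querySelector('video') as any; // prefixed, untyped API

// Per the interface comment, assigning webkitMediaSourceURL to src
// enables the media source logic.
video.src = video.webkitMediaSourceURL;

const xhr = new XMLHttpRequest();
xhr.open('GET', 'movie.webm'); // hypothetical resource
xhr.setRequestHeader('Range', 'bytes=0-262143'); // hypothetical chunk size
xhr.responseType = 'arraybuffer';
xhr.onload = () => {
  // Hand the encoded bytes to the decoder for buffering and display.
  video.webkitSourceAppend(new Uint8Array(xhr.response as ArrayBuffer));
  // After the last chunk, signal a clean end of stream.
  video.webkitSourceEndOfStream(video.EOS_NO_ERROR);
};
xhr.send();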
Frank then stepped forward to show how webkitSourceAppend() can be used to implement HTTP adaptive streaming. His example uses WebM – there are no examples with MPEG or Ogg yet.
The chunks that Frank’s demo used were 150 video frames long (6.25 s), with 5 s of audio. Stream switching only switched video, since audio data is much lower bandwidth and more important to retain at high quality. Switching was done on multiplexed files.
Every chunk requires an XHR range request – this could be optimised if the connections were kept open per adaptation. Seeking works, too, but since decoding requires download of a whole chunk, seeking latency is determined by the time it takes to download and decode that chunk.
Similar to DASH, when using this approach for live streaming, the server has to produce one file per chunk, since byte range requests are not possible on a continuously growing file.
Frank did not use DASH as the manifest format for his HTTP adaptive streaming demo, but instead used a hacked-up custom XML format. It would be possible to use JSON or any other format, too.
After this session, I was actually completely blown away by the possibilities that such a simple API extension allows. If I wasn’t sold on the idea of an appendBytes() function in the earlier session, this one completely changed my mind. While I still believe we need to standardise an HTTP adaptive streaming file format that all browsers will support for all codecs, and I still believe that a native implementation for support of such a file format is necessary, I also believe that this approach of webkitSourceAppend() is what HTML needs – and maybe it needs it faster than native HTTP adaptive streaming support.
Standards for Browser Video Playback Metrics
This session was moderated by Zachary Ozer and Pablo Schklowsky (JWPlayer). Their motivation for the topic was, in fact, also HTTP adaptive streaming. Once you leave the decisions about when to do stream switching to JavaScript (through a function such as webkitSourceAppend()), you have to expose stream metrics to the JS developer so they can make informed decisions. The other use case is, of course, monitoring the quality of video delivery for reporting to the provider, who may then decide to change their delivery environment.
The discussion found that we really care about metrics on three different levels:
- measuring the network performance (bandwidth)
- measuring the decoding pipeline performance
- measuring the display quality
In the end, it seemed that work previously done by Steve Lacey on a proposal for video metrics was generally acceptable, except for the playbackJitter metric, which may be too aggregate to mean much.
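To make this concrete, here is a sketch polling the prefixed decode counters that Chromium later shipped along these lines – treat the property names as an assumption from memory, not a standard:

// Sketch: poll per-element decode-quality counters once a second.
// The prefixed properties are Chromium's experimental counters, accessed
// untyped; a real player would feed these into its switching heuristics.
const v = document.querySelector('video') as any;
setInterval(() => {
  console.log(`decoded=${v.webkitDecodedFrameCount} dropped=${v.webkitDroppedFrameCount}`);
}, 1000);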
Device Inputs / A/V in the Browser
I didn’t actually attend this session held by Anant Narayanan (Mozilla), but from what I heard, the discussion focused on how to manage permission of access to video camera, microphone and screen, e.g. when multiple applications (tabs) want access or when the same site wants access in a different session. This may apply to real-time communication with screen sharing, but also to photo sharing, video upload, or canvas access to devices e.g. for time lapse photography.
Open Video Editors
This was another session that I wasn’t able to attend, but I believe the creation of good open source video editing software and similar video creation software is really crucial to giving video a broader user appeal.
Jeff Fortin (PiTiVi) moderated this session and I was fascinated to later see his analysis of the lifecycle of open source video editors. It is shocking to see how many people/projects have tried to create an open source video editor and how many have stopped their project. It is likely that the creation of a video editor is such a complex challenge that it requires a larger and more committed open source project – single people will just run out of steam too quickly. This may be comparable to the creation of a Web browser (see the size of the Mozilla project) or a text processing system (see the size of the OpenOffice project).
Jeff also mentioned the need to create open video editor standards around playlist file formats etc. Possibly the Open Video Alliance could help. In any case, something has to be done in this space – maybe this would be a good topic to focus next year’s OVC on?
Monday’s Breakout Groups
The conference ended officially on Sunday night, but we had a third day of discussions / hackday at the wonderful New York Lawschool venue. We had collected issues of interest during the two previous days and organised the breakout groups on the morning (Schedule).
In the Content Protection/DRM session, Mark Watson from Netflix explained how their API works and that they believe all we need in browsers is a secure way to exchange keys and an indicator of which protection scheme is used – the actual protection scheme would not be implemented by the browser, but be provided by the underlying system (media framework/operating system). I think that until somebody actually implements something in a browser fork and shows how this can be done, we won’t have much progress. In my understanding, we may also need to disable part of the video API for encrypted content, because otherwise you can always e.g. grab frames from the video element into a canvas and save them from there.
In the Playlists and Gapless Playback session, there was massive brainstorming about what new cool things can be done with the video element in browsers if playback between snippets can be made seamless. Further discussions were about standard playlist file formats (such as XSPF, MRSS or M3U), media fragment URIs in playlists for mashups, and the need to expose track metadata for HTML5 media elements.
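For reference, the simplest of those playlist formats, extended M3U, is just a text file of duration/title/URL entries like this (the URL, duration and title are made up):

#EXTM3U
#EXTINF:123,Example clip title
http://example.com/video1.webm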
What more can I say? It was an amazing three days, and the complexity of problems that we’re dealing with is a tribute to how far HTML5 and open video have already come, and exciting news for the kind of applications that will be possible (both professional and community) once we’ve solved the problems of today. It will be exciting to see what progress we will have made by next year’s conference.
Thanks go to Google for sponsoring my trip to OVC.
UPDATE: We actually have a mailing list for open media developers who are interested in these and similar topics – do join at http://lists.annodex.net/cgi-bin/mailman/listinfo/foms.
-
Preserving or syncing audio of original video to video fragments
2 May 2015, by Code_Ed_Student
I currently have a few videos that I want to split into EXACTLY 30-second segments. I have been able to accomplish this, but the audio is not being properly preserved. It’s out of sync. I tried playing with
arsample
ab
and other options, but I am not getting the desired output. What would be the best way to both split the videos into exactly 30-second segments and preserve the audio?
ffmpeg -i $file -preset medium -map 0 -segment_time 30 -g 225 -r 25 -sc_threshold 0 -force_key_frames expr:gte(t,n_forced*30) -f segment -movflags faststart -vf scale=-1:720,format=yuv420p -vcodec libx264 -crf 20 -codec:a copy $dir/$video_file-%03d.mp4
short snippet of output
Input #0, flv, from '/media/sf_linux_sandbox/hashtag_pull/video-downloads/5b64d7ab-a669-4016-b55e-fe4720cbd843/5b64d7ab-a669-4016-b55e-fe4720cbd843.flv':
Metadata:
moovPosition : 40
avcprofile : 77
avclevel : 31
aacaot : 2
videoframerate : 30
audiochannels : 2
©too : Lavf56.15.102
length : 7334912
sampletype : mp4a
timescale : 48000
Duration: 00:02:32.84, start: 0.000000, bitrate: 2690 kb/s
Stream #0:0: Video: h264 (Main), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 30.30 fps, 29.97 tbr, 1k tbn, 59.94 tbc
Stream #0:1: Audio: aac (LC), 48000 Hz, stereo, fltp
[libx264 @ 0x3663ba0] using SAR=1/1
[libx264 @ 0x3663ba0] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64
[libx264 @ 0x3663ba0] profile High, level 3.1
[libx264 @ 0x3663ba0] 264 - core 144 r2 40bb568 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=225 keyint_min=22 scenecut=0 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=20.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, segment, to '/media/sf_linux_sandbox/hashtag_pull/video-edits/30/5b64d7ab-a669-4016-b55e-fe4720cbd843/5b64d7ab-a669-4016-b55e-fe4720cbd843-%03d.mp4':
Metadata:
moovPosition : 40
avcprofile : 77
avclevel : 31
aacaot : 2
videoframerate : 30
audiochannels : 2
©too : Lavf56.15.102
length : 7334912
sampletype : mp4a
timescale : 48000
encoder : Lavf56.16.102
Stream #0:0: Video: h264 (libx264), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], q=-1--1, 25 fps, 12800 tbn, 25 tbc
Metadata:
encoder : Lavc56.19.100 libx264
Stream #0:1: Audio: aac, 48000 Hz, stereo
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Stream #0:1 -> #0:1 (copy)
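One plausible direction (an untested sketch, not a verified answer): with -codec:a copy, the segment muxer can only cut the AAC stream on packet boundaries, which rarely coincide with the forced video keyframes, so each segment starts with slightly offset audio. Re-encoding the audio and resetting timestamps per segment usually keeps A/V in sync, e.g.:

ffmpeg -i "$file" -map 0 -preset medium -vf scale=-1:720,format=yuv420p -vcodec libx264 -crf 20 -g 225 -r 25 -sc_threshold 0 -force_key_frames "expr:gte(t,n_forced*30)" -c:a aac -b:a 128k -f segment -segment_time 30 -reset_timestamps 1 "$dir/$video_file-%03d.mp4"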