
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
Other articles (61)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Participate in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
Currently, MediaSPIP is only available in French and (...)
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (10241)
-
Getting either incorrect output resolution or FPS from ffmpeg
5 November 2011, by Adam
I am capturing an RTSP stream from a security camera and transcoding it for live streaming to iPhone, using OS X as the encoding platform.
I have it working correctly, and I'm tuning it.
However, it seems that it is not outputting the requested resolution. This is my script:
/Applications/SecurityCamera/openRTSP -v -c -t rtsp://10.0.1.118/ch1-s1 | \
/Applications/SecurityCamera/ffmpeg \
-r 10 -i - \
-y -an -ab 64000 -f mpegts -vcodec copy -s 960x640 \
-flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 \
-subq 5 -trellis 1 -refs 1 -coder 0 -me_range 16 -keyint_min 25 \
-sc_threshold 40 -i_qfactor 0.71 -bt 400k -maxrate 524288 -bufsize 524288 \
-qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 \
-aspect 960:640 -r 10 -g 10 -async 2 - \
| /Applications/SecurityCamera/mediastreamsegmenter -b http://localhost:8080/ \
-f /Library/WebServer/Documents/ -i stream.m3u8 -t 10 -s 4 -D
This is the status report:
Input #0, h264, from 'pipe:':
Duration: N/A, bitrate: N/A
Stream #0.0: Video: h264, yuv420p, 1600x1200, 10 fps, 10 tbr, 1200k tbn, 20 tbc
[mpegts @ 0x10100c200] muxrate VBR, pcr every 1 pkts, sdt every 200, pat/pmt every 40 pkts
Output #0, mpegts, to 'pipe:':
Metadata:
encoder : Lavf52.93.0
Stream #0.0: Video: libx264, yuv420p, 1600x1200 [PAR 1:1 DAR 4:3], q=2-31, 90k tbn, 10 tbc
Stream mapping:
Stream #0.0 -> #0.0
You can see that it's working, but it is outputting 1600x1200 for some reason (possibly -vcodec copy copies all codec parameters, not just the codec type?).
If I change -vcodec copy to -vcodec libx264, then I get the correct status report (stating 960x640, correct), but the streaming drops to 2 fps (why? I'm forcing both input and output rates!) and it halts after 54 frames (see the output below):
Seems stream 0 codec frame rate differs from container frame rate: 20.00 (20/1) -> 10.00 (20/2)
Input #0, h264, from 'pipe:':
Duration: N/A, bitrate: N/A
Stream #0.0: Video: h264, yuv420p, 1600x1200, 10 fps, 10 tbr, 1200k tbn, 20 tbc
[buffer @ 0x100d02420] w:1600 h:1200 pixfmt:yuv420p
[scale @ 0x100d026f0] w:1600 h:1200 fmt:yuv420p -> w:960 h:640 fmt:yuv420p flags:0x4
[libx264 @ 0x10100d400] using SAR=1/1
[libx264 @ 0x10100d400] frame MB size (60x40) > level limit (1620)
[libx264 @ 0x10100d400] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64 SlowCTZ SlowAtom
[libx264 @ 0x10100d400] profile Constrained Baseline, level 3.0
[mpegts @ 0x10100c200] muxrate VBR, pcr every 1 pkts, sdt every 200, pat/pmt every 40 pkts
Output #0, mpegts, to 'pipe:':
Metadata:
encoder : Lavf52.93.0
Stream #0.0: Video: libx264, yuv420p, 960x640 [PAR 1:1 DAR 3:2], q=10-51, 200 kb/s, 90k tbn, 10 tbc
Stream mapping:
Stream #0.0 -> #0.0
read pmap
fps= 3 q=37.0 size= 37kB time=0.10 bitrate=3008.0kbits/s
video pid set at 100
found sequence start
next segment value 1026000
written bytes 376 skipped 0
frame= 54 fps= 2 q=-1.0 Lsize= 160kB time=5.40 bitrate= 242.0kbits/s
video:141kB audio:0kB global headers:0kB muxing overhead 12.872737%
frame I:6 Avg QP:34.68 size: 23524
[libx264 @ 0x10100d400] frame P:48 Avg QP:41.53 size: 75
[libx264 @ 0x10100d400] mb I I16..4: 63.9% 0.0% 36.1%
[libx264 @ 0x10100d400] mb P I16..4: 0.1% 0.0% 0.0% P16..4: 0.8% 0.1% 0.0% 0.0% 0.0% skip:99.0%
[libx264 @ 0x10100d400] final ratefactor: 38.54
[libx264 @ 0x10100d400] coded y,uvDC,uvAC intra: 57.7% 22.3% 2.0% inter: 0.0% 0.1% 0.0%
[libx264 @ 0x10100d400] i16 v,h,dc,p: 23% 35% 27% 15%
[libx264 @ 0x10100d400] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 32% 16% 4% 3% 3% 7% 4% 8%
[libx264 @ 0x10100d400] i8c dc,h,v,p: 83% 11% 5% 0%
[libx264 @ 0x10100d400] kb/s:214.43
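A hedged reading of the two runs above: with -vcodec copy the encoded bitstream is passed through untouched, so -s 960x640 cannot take effect (hence the 1600x1200 output), while the 2 fps in the second run is consistent with the x264 encode simply not keeping up in real time at these settings; the "frame MB size (60x40) > level limit (1620)" line also suggests raising -level from 30 to 31 for a 960x640 picture. A sketch of the re-encoding variant with trimmed x264 options (same paths and camera URL as in the question; exactly which options to relax is a judgment call):
# sketch: re-encode (required for scaling) with lighter x264 settings,
# and level 3.1, whose frame-size limit accommodates 960x640
/Applications/SecurityCamera/openRTSP -v -c -t rtsp://10.0.1.118/ch1-s1 | \
/Applications/SecurityCamera/ffmpeg \
-r 10 -i - \
-y -an -f mpegts -vcodec libx264 -s 960x640 \
-subq 1 -trellis 0 -refs 1 -me_range 16 -keyint_min 25 \
-maxrate 524288 -bufsize 524288 -qmin 10 -qmax 51 -level 31 \
-r 10 -g 10 -async 2 - \
| /Applications/SecurityCamera/mediastreamsegmenter -b http://localhost:8080/ \
-f /Library/WebServer/Documents/ -i stream.m3u8 -t 10 -s 4 -D
-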
Generating 64kbps audio-only MPEG-TS for the HTTP Live segmenter to meet the 64kbps audio-only requirement
23 April 2012, by Pobre
I am trying to convert our MP4 files into MPEG-TS and segment them into .ts files for my iPhone app to play. I am using Carson McDonald's HTTP-Live-Video-Stream-Segmenter-and-Distributor to do that.
I got his stuff compiled and working correctly. I am currently trying to meet Apple's requirement of providing a baseline 64 kbps audio-only stream in my m3u8 playlist.
Carson doesn't seem to have a profile for that. I need to be able to generate a 64kbps audio-only stream from the MP4 and turn it into MPEG-TS for the segmenter to cut into .ts files. I am trying to find the right ffmpeg command that will validate cleanly with Apple's mediastreamvalidator.
So far I have modified an existing encoding profile to try to achieve 64kbps in total:
ffmpeg -er 4 -i %s -f mpegts -acodec libmp3lame -ar 22050 -ab 32k -s 240x180 -vcodec libx264 -b 16k -flags +loop+mv4 -cmp 256 -partitions +parti4x4+partp8x8+partb8x8 -subq 7 -trellis 1 -refs 5 -coder 0 -me_range 16 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -bt 64k -maxrate 16k -bufsize 16k -rc_eq 'blurCplx^(1-qComp)' -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 -aspect 4:3 -r 10 -g 30 -async 2 - | %s %s %s %s %s
But when I try to validate the result using mediastreamvalidator, it reports errors after a few .ts segments:
Playlist Validation: OK
Segments:
sample_cell_4x3_64k-00001.ts:
WARNING: Media segment exceeds target duration of 10.00 seconds by 1.30 seconds (segment duration is 11.30 seconds)
sample_cell_4x3_64k-00002.ts:
WARNING: Media segment exceeds target duration of 10.00 seconds by 1.40 seconds (segment duration is 11.40 seconds)
....
sample_cell_4x3_64k-00006.ts:
ERROR: (-1) Unknown video codec: 1836069494 (program 0, track 0)
ERROR: (-1) failed to parse segment as either an MPEG-2 TS or an ES
sample_cell_4x3_64k-00007.ts:
ERROR: (-1) Unknown video codec: 1836069494 (program 0, track 0)
ERROR: (-1) failed to parse segment as either an MPEG-2 TS or an ES
....
Average segment duration: 10.26 seconds
Average segment bitrate: 376797.92 bps
Average segment structural overhead: 349242.17 bps (92.69 %)
Is there some way I can generate this correctly, with just audio totalling 64kbps, and turn it into MPEG-TS ready to be segmented and validated?
Am I approaching the problem the right way?
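Since the goal is an audio-only variant, one approach (a hedged sketch, not a validated profile) is to drop the video track entirely with -vn instead of encoding a 16kbps video stream; the validator errors above complain about a video codec it cannot parse, so removing video sidesteps them, and an audio-only HLS variant is usually served as plain AAC in the transport stream. The input name is a placeholder, and libfaac assumes a build with that AAC encoder enabled:
# sketch: audio-only, 64kbps AAC, muxed into MPEG-TS for the segmenter
# ("-vn" drops the video track; input.mp4 is a placeholder)
ffmpeg -i input.mp4 -vn -acodec libfaac -ac 2 -ar 44100 -ab 64k -f mpegts - | %s %s %s %s %s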
-
How to build and link FFmpeg for iOS?
30 June 2015, by Alexander Tkachenko
Hi all!
I know there are a lot of questions here about FFmpeg on iOS, but none of the answers fits my case :(
Something strange happens every time I try to link FFmpeg into my project, so please help me!
My task is to write a video-chat application for iOS that uses the RTMP protocol to publish and read a video stream to/from a custom Flash Media Server.
I decided to use rtmplib, a free open-source library for streaming FLV video over RTMP, as it is the only appropriate library.
Many problems appeared when I began to research it, but later I understood how it should work.
Now I can read a live stream of FLV video (from a URL) and send it back to the channel with the help of my application.
My trouble now is with sending video FROM the camera.
The basic sequence of operations, as I understand it, should be the following:
-
Using AVFoundation, with the help of the chain Device → AVCaptureSession → AVVideoDataOutput → AVAssetWriter, I write this to a file (if you need, I can describe this flow in more detail, but in the context of the question it is not important). This flow is necessary for hardware-accelerated conversion of live video from the camera into H.264. But it arrives in the MOV container format. (This step is completed.)
-
I read this temporary file as each sample is written and obtain the stream of bytes of video data (H.264-encoded, in the QuickTime container). (This step is already completed.)
-
I need to convert the video data from the QuickTime container format to FLV, and all in real time (packet by packet).
-
Once I have the packets of video data in the FLV container format, I will be able to send them over RTMP using rtmplib.
Now, the most complicated part for me is step 3.
I think I need to use the ffmpeg libraries for this conversion (libavformat). I even found source code showing how to decode H.264 data packets from a MOV file (looking in libavformat, I found that it is possible to extract these packets even from a byte stream, which is more appropriate for me). With that done, I will need to encode the packets into FLV (using ffmpeg, or manually by adding FLV headers to the H.264 packets; that is not a problem and is easy, if I am correct).
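For what it's worth, at the container level step 3 is a remux rather than a re-encode; a hedged sketch of the equivalent operation with the ffmpeg command-line tool (file names are placeholders):
# sketch: repackage H.264 from a QuickTime/MOV container into FLV
# as a stream copy; "-an" drops audio, no video re-encoding happens
ffmpeg -i capture.mov -an -vcodec copy -f flv output.flv
Doing the same in-process with libavformat amounts to reading packets with av_read_frame() from the MOV demuxer and handing them to av_interleaved_write_frame() on an output context opened with the FLV muxer.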
FFmpeg has great documentation and is a very powerful library, and I don't think there will be a problem using it. BUT the problem is that I cannot get it working in an iOS project.
I have spent 3 days reading documentation and Stack Overflow and googling an answer to the question "How to build FFmpeg for iOS", and I think my PM is going to fire me if I spend one more week trying to compile this library :))
I tried many different build scripts and configure files, but when I build FFmpeg I get libavformat, libavcodec, etc. for the x86 architecture (even when I specify armv6 in the build script). (I use "lipo -info libavcodec.a" to show the architectures.)
So, since I cannot build from source, I decided to find a prebuilt FFmpeg, built for the armv7, armv6, and i386 architectures.
I downloaded the iOS Comm Lib from MidnightCoders on GitHub; it contains an example of FFmpeg usage, with prebuilt .a files of avcodec, avformat, and other FFmpeg libraries.
I checked their architectures:
iMac-2:MediaLibiOS root# lipo -info libavformat.a
Architectures in the fat file: libavformat.a are: armv6 armv7 i386
And I found that it is appropriate for me!
When I add these libraries and headers to the Xcode project, it compiles fine (I don't even get warnings like "library is compiled for another architecture"), and I can use the structures from the headers. But when I try to call a C function from libavformat (av_register_all()), the compiler shows the error message "Symbol(s) not found for architecture armv7: av_register_all".
I thought that maybe there were no symbols in the lib, and tried to list them:
root# nm -arch armv6 libavformat.a | grep av_register_all
00000000 T _av_register_all
Now I am stuck here; I don't understand why Xcode cannot see these symbols, and I cannot move forward.
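One hedged observation: the nm above inspects the armv6 slice, while the link error is for armv7, so it is worth confirming the symbol is present in the slice the linker is actually using:
# check the armv7 slice specifically
nm -arch armv7 libavformat.a | grep av_register_all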
Please correct me if I am wrong in my understanding of the flow for publishing an RTMP stream from iOS, and help me build and link FFmpeg for iOS.
I have the iPhone 5.1 SDK and Xcode 4.2.
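A hedged sketch of a cross-compiling configure invocation for armv7; all paths are placeholders that depend on the installed Xcode/SDK, and the available flags vary between FFmpeg versions. Without --enable-cross-compile, configure silently targets the build machine, which would produce exactly the x86-only libraries described above:
# sketch: cross-compile FFmpeg for armv7 (adjust SDK path and compiler
# for the local Xcode; older FFmpeg builds also need gas-preprocessor.pl
# on the PATH for the ARM assembly)
SDK=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS5.1.sdk
./configure \
  --enable-cross-compile --target-os=darwin --arch=arm --cpu=cortex-a8 \
  --cc="clang -arch armv7" --sysroot="$SDK" \
  --extra-cflags="-arch armv7 -isysroot $SDK" \
  --extra-ldflags="-arch armv7 -isysroot $SDK"
make && lipo -info libavformat/libavformat.a   # should now report armv7
Separately, when the linker cannot find av_register_all even though nm shows _av_register_all in the library, one classic cause is calling the C API from a C++ or Objective-C++ (.mm) file without wrapping the FFmpeg #includes in an extern "C" { ... } block; FFmpeg's headers do not do this themselves, so the linker ends up looking for C++-mangled names. That is worth ruling out before rebuilding anything.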