
Media (17)
-
Matmos - Action at a Distance
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Danger Mouse & Jemini - What U Sittin’ On? (starring Cee Lo and Tha Alkaholiks)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Cornelius - Wataridori 2
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
The Rapture - Sister Saviour (Blackstrobe Remix)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Chuck D with Fine Arts Militia - No Meaning No
15 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (40)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
De l’upload à la vidéo finale [version standalone]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, you need to create an SPIP article and attach the "source" video document to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
-
Support audio et vidéo HTML5
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer flash player is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (5901)
-
FFmpeg: unspecified pixel format when opening video with custom context
14 February 2021, by Pedro
I am trying to decode a video with a custom context. The purpose is that I want to decode the video directly from memory. In the following code, I am reading from a file in the read function passed to avio_alloc_context - but this is just for testing purposes.


I think I've read every post there is on Stack Overflow or on any other website related to this topic. At least I definitely tried my best to do so. While there is much in common, the details differ: people set different flags, some say av_probe_input_format is required, some say it isn't, etc. And for some reason nothing works for me.


My problem is that the pixel format is unspecified (see output below), which is why I run into problems later when calling sws_getContext. I checked pFormatContext->streams[videoStreamIndex]->codec->pix_fmt, and it is -1.
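
(Aside, not part of the original question: a minimal sketch of why that -1 is fatal downstream. sws_getContext needs a concrete source pixel format, and an unprobed stream reports AV_PIX_FMT_NONE, i.e. -1; the helper name below is made up for illustration.)

#include <libswscale/swscale.h>

// Illustrative guard: refuse to build a scaler for an unprobed stream.
struct SwsContext *make_scaler(int w, int h, enum AVPixelFormat src_fmt) {
    if (src_fmt == AV_PIX_FMT_NONE)
        return NULL;  // the stream was never fully probed; scaling cannot work
    return sws_getContext(w, h, src_fmt,
                          w, h, AV_PIX_FMT_RGB24,  // arbitrary target format
                          SWS_BILINEAR, NULL, NULL, NULL);
}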


Please note my comments // things I tried and // seems not to help in the code. I think the answer might be hidden somewhere there. I tried many combinations of the hints that I've read so far, but I am missing a detail, I guess.


The problem is not the video file, because when I go the standard way and just call avformat_open_input(&pFormatContext, pFilePath, NULL, NULL) without a custom context, everything runs fine.
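
(For comparison, the working "standard way" looks roughly like this; a sketch of what the question describes, using the modern codecpar field and a made-up function name.)

#include <libavformat/avformat.h>

// Let FFmpeg do its own I/O; probing then fills in the pixel format.
int open_standard(const char *path) {
    AVFormatContext *ctx = NULL;
    if (avformat_open_input(&ctx, path, NULL, NULL) < 0)
        return -1;
    if (avformat_find_stream_info(ctx, NULL) < 0) {
        avformat_close_input(&ctx);
        return -1;
    }
    // ctx->streams[0]->codecpar->format now holds a concrete AVPixelFormat
    avformat_close_input(&ctx);
    return 0;
}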


The code compiles and runs as is.



#include <libavformat/avformat.h>
#include <stdio.h>
#include <string.h>

FILE *f;

// Read callback for the custom AVIOContext: pulls data from the global FILE.
static int read(void *opaque, uint8_t *buf, int buf_size) {
    if (feof(f)) return -1;
    return fread(buf, 1, buf_size, f);
}

int openVideo(const char *pFilePath) {
    const int bufferSize = 32768;
    int ret;

    av_register_all();

    f = fopen(pFilePath, "rb");
    uint8_t *pBuffer = (uint8_t *) av_malloc(bufferSize + AVPROBE_PADDING_SIZE);
    AVIOContext *pAVIOContext = avio_alloc_context(pBuffer, bufferSize, 0, NULL,
                                                   &read, NULL, NULL);

    if (!f || !pBuffer || !pAVIOContext) {
        printf("error: open / alloc failed\n");
        // cleanup...
        return 1;
    }

    AVFormatContext *pFormatContext = avformat_alloc_context();
    pFormatContext->pb = pAVIOContext;

    const int readBytes = read(NULL, pBuffer, bufferSize);

    printf("readBytes = %i\n", readBytes);

    if (readBytes <= 0) {
        printf("error: read failed\n");
        // cleanup...
        return 2;
    }

    if (fseek(f, 0, SEEK_SET) != 0) {
        printf("error: fseek failed\n");
        // cleanup...
        return 3;
    }

    // required for av_probe_input_format
    memset(pBuffer + readBytes, 0, AVPROBE_PADDING_SIZE);

    AVProbeData probeData;
    probeData.buf = pBuffer;
    probeData.buf_size = readBytes;
    probeData.filename = "";
    probeData.mime_type = NULL;

    pFormatContext->iformat = av_probe_input_format(&probeData, 1);

    // things I tried:
    //pFormatContext->flags = AVFMT_FLAG_CUSTOM_IO;
    //pFormatContext->iformat->flags |= AVFMT_NOFILE;
    //pFormatContext->iformat->read_header = NULL;

    // seems not to help (therefore commented out here):
    AVDictionary *pDictionary = NULL;
    //av_dict_set(&pDictionary, "analyzeduration", "8000000", 0);
    //av_dict_set(&pDictionary, "probesize", "8000000", 0);

    if ((ret = avformat_open_input(&pFormatContext, "", NULL, &pDictionary)) < 0) {
        char buffer[4096];
        av_strerror(ret, buffer, sizeof(buffer));
        printf("error: avformat_open_input failed: %s\n", buffer);
        // cleanup...
        return 4;
    }

    printf("retrieving stream information...\n");

    if ((ret = avformat_find_stream_info(pFormatContext, NULL)) < 0) {
        char buffer[4096];
        av_strerror(ret, buffer, sizeof(buffer));
        printf("error: avformat_find_stream_info failed: %s\n", buffer);
        // cleanup...
        return 5;
    }

    printf("nb_streams = %i\n", pFormatContext->nb_streams);

    // further code...

    // cleanup...
    return 0;
}

int main() {
    openVideo("video.mp4");
    return 0;
}




This is the output that I get:

readBytes = 32768

retrieving stream information...

[mov,mp4,m4a,3gp,3g2,mj2 @ 0xdf8d20] stream 0, offset 0x30: partial file
[mov,mp4,m4a,3gp,3g2,mj2 @ 0xdf8d20] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 640x360, 351 kb/s): unspecified pixel format

Consider increasing the value for the 'analyzeduration' and 'probesize' options

nb_streams = 2


UPDATE:

Thanks to WLGfx, here is the solution: the only thing that was missing was the seek function. Apparently, implementing it is mandatory for decoding. It is important to return the new offset, and not 0, in case of success (some solutions found on the web just return the return value of fseek, and that is wrong). Here is the minimal solution that made it work:


static int64_t seek(void *opaque, int64_t offset, int whence) {
    if (whence == SEEK_SET && fseek(f, offset, SEEK_SET) == 0) {
        return offset;
    }
    // handling AVSEEK_SIZE doesn't seem mandatory
    return -1;
}




Of course, the call to avio_alloc_context needs to be adapted accordingly:


AVIOContext *pAVIOContext = avio_alloc_context(pBuffer, bufferSize, 0, NULL,
                                               &read, NULL, &seek);
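
(A fuller callback would also honor SEEK_CUR, SEEK_END, and FFmpeg's optional AVSEEK_SIZE query, through which a demuxer can ask for the total stream size. The following is only a sketch building on the code above, with a made-up function name; it is not part of the original answer.)

static int64_t seek_full(void *opaque, int64_t offset, int whence) {
    whence &= ~AVSEEK_FORCE;  // this flag may be ORed in; harmless to ignore here
    if (whence == AVSEEK_SIZE) {
        // optional query: report the total size instead of seeking
        long cur = ftell(f);
        if (cur < 0 || fseek(f, 0, SEEK_END) != 0) return -1;
        long size = ftell(f);
        fseek(f, cur, SEEK_SET);  // restore the previous position
        return size < 0 ? -1 : size;
    }
    if (fseek(f, offset, whence) != 0) return -1;
    long pos = ftell(f);  // return the new absolute offset, as noted above
    return pos < 0 ? -1 : pos;
}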



-
ffmpeg problems with streaming mp4 over udp in local network
28 November 2019, by AJ Cole
I'm streaming mp4 video files (some of them are avi files converted to mp4 with ffmpeg earlier) over udp://232.255.23.23:1234 from linux (embedded) with ffmpeg v3.4.2 to multiple linux (antiX) machines that play the stream with MPV. All of this happens in a local network, so I expected it to work flawlessly, but unfortunately it doesn't. Here are the original commands I tried to use:
ffmpeg:
ffmpeg -re -i PATH_TO_FILE.mp4 -c copy -f mpegts udp://232.255.23.23:1234
mpv:
mpv --no-config --geometry=[geo settings] --no-border udp://232.255.23.23:1234
This seemed to work well; however, a problem appeared: on the displaying end, the stream is actually much longer than the streamed content itself. The mp4 files total 5 minutes 36 seconds, yet mpv plays the entire stream loop in 6 minutes or more. I think this happens because of dropped frames that mpv waits to recover from, which extends the length of the actual content. This cannot work in my case, as I have a precise time slot for displaying the stream, and it cannot be longer than the streamed content.
All the content is made in 1680x800 resolution and is displayed on a screen with 1680x1050 resolution (positioned with mpv geometry). It appears that using this command for mpv:
mpv --no-config --framedrop=no --geometry=[geo settings] --no-border udp://232.255.23.23:1234
made the duration correct, but it sometimes introduces huge artifacts in the videos.
I read that using -re for streaming can cause these frame drops, so I tried setting a static number of fps for both the file input and the output stream, for example:
ffmpeg -re -i PATH_TO_FILE.mp4 -c copy -r 25 -f mpegts udp://232.255.23.23:1234
This reads the file at native framerate and outputs the stream at 25 fps. It appears to get the timing right, but it also causes occasional artifacts and, I think, has worse quality overall. Output from mpv when one of the artifacts happened:
[ffmpeg/video] h264: cabac decode of qscale diff failed at 85 19
[ffmpeg/video] h264: error while decoding MB 85 19, bytestream 85515
I also tried using --untimed or --no-cache in mpv, but this causes stutters in the video.
I'm also getting frequent Invalid video timestamp warnings in MPV, for example:
Invalid video timestamp: 1.208333 -> -8.711667
Playing in mpv without --no-config and with --untimed added also causes frequent artifacts:
V: -00:00:00 / 00:00:00 Cache: 0s+266KB
[ffmpeg/video] h264: Invalid NAL unit 8, skipping.
V: -00:00:00 / 00:00:00 Cache: 0s+274KB
[ffmpeg/video] h264: Reference 4 >= 4
[ffmpeg/video] h264: error while decoding MB 6 0, bytestream 31474
[ffmpeg/video] h264: error while decoding MB 78 49, bytestream -12
V: 00:00:06 / 00:00:00 Cache: 5s+11KB
Invalid video timestamp: 6.288333 -> -8.724933
V: -00:00:05 / 00:00:00 Cache: 3s+0KB
[ffmpeg/video] h264: Invalid NAL unit 8, skipping.
[ffmpeg/video] h264: error while decoding MB 59 24, bytestream -27
V: -00:00:04 / 00:00:00 Cache: 3s+0KB
[ffmpeg/video] h264: Reference 4 >= 3
[ffmpeg/video] h264: error while decoding MB 5 2, bytestream 13402
V: -00:00:03 / 00:00:00 Cache: 2s+0KB
[ffmpeg/video] h264: Reference 5 >= 4
[ffmpeg/video] h264: error while decoding MB 51 21, bytestream 9415
I tried playing the stream with ffplay, and it also caused the videos to be "played" 20 seconds longer.
Is there any way to keep the streaming duration intact and prevent those huge artifacts? These aren't huge video files; they are a few MB each, and everything happens in the local network, so the latencies are minimal. Output from ffmpeg when streaming one of the files:
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SDM.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.48.100
Duration: 00:00:20.00, start: 0.000000, bitrate: 1883 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1680x800 [SAR 1:1 DAR 21:10], 1880 kb/s, 24 fps, 24 tbr, 12288 tbn, 48 tbc (default)
Metadata:
handler_name : VideoHandler
Output #0, mpegts, to 'udp://232.255.23.23:1234':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.83.100
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1680x800 [SAR 1:1 DAR 21:10], q=2-31, 1880 kb/s, 24 fps, 24 tbr, 90k tbn, 25 tbc (default)
Metadata:
handler_name : VideoHandler
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
frame= 480 fps= 24 q=-1.0 Lsize= 5009kB time=00:00:19.87 bitrate=2064.7kbits/s speed= 1x
video:4592kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 9.082929%
Edit: the files don't contain any audio, so it should be even less traffic on the network.
-
What is “interoperable TTML”?
19 September 2012, by silvia
I've just tried to come to terms with the latest state of TTML, the Timed Text Markup Language.
TTML has been specified by the W3C Timed Text Working Group and released as a RECommendation v1.0 in November 2010. Since then, several organisations have tried to adopt it as their caption file format. This includes the SMPTE, the EBU (European Broadcasting Union), and Microsoft.
Both Microsoft and the EBU actually looked at TTML in detail and decided that, in order to make it usable for their use cases, a restriction of its functionality is needed.
EBU-TT
The EBU released EBU-TT, which restricts the set of valid attributes and features. “The EBU-TT format is intended to constrain the features provided by TTML, especially to make EBU-TT more suitable for the use with broadcast video and web video applications.” (see EBU-TT).
In addition, EBU-specific namespaces were introduced to extend TTML with EBU-specific data types, e.g. ebuttdt:frameRateMultiplierType or ebuttdt:smpteTimingType. Similarly, a bunch of metadata elements were introduced, e.g. ebuttm:documentMetadata, ebuttm:documentEbuttVersion, or ebuttm:documentIdentifier.
The use of namespaces as an extensibility mechanism ensures that EBU-TT files continue to be valid TTML files. However, any vanilla TTML parser will not know what to do with these custom extensions and will drop them on the floor.
Simple Delivery Profile
With the intention of making TTML ready for “internet delivery of Captions originated in the United States”, Microsoft proposed a “Simple Delivery Profile for Closed Captions (US)” (see Simple Profile). The Simple Profile is also a restriction of TTML.
Unfortunately, the Microsoft profile is not the same as the EBU-TT profile: for example, it contains the “set” element, which is not conformant in EBU-TT. Similarly, the supported style features are different, e.g. Simple Profile supports “display-region”, while EBU-TT does not. On the other hand, EBU-TT supports monospace, sans-serif and serif fonts, while the Simple profile does not.
Thus, files created for the Simple Delivery Profile will not work on players that expect EBU-TT, and the reverse.
Fortunately, the Simple Delivery Profile does not introduce any new namespaces or new features, so at least it is an explicit subset of TTML and not both a restriction and an extension like EBU-TT.
SMPTE-TT
SMPTE also created a version of the TTML standard called SMPTE-TT. SMPTE did not decide on a subset of TTML for their purposes – it was simply adopted as a complete set. “This Standard provides a framework for timed text to be supported for content delivered via broadband means,…” (see SMPTE-TT).
However, SMPTE extended TTML in SMPTE-TT with an ability to store a binary blob with captions in another format. This allows using SMPTE-TT as a transport format for any caption format and is deemed to help with “backwards compatibility”.
Now, instead of specifying a profile, SMPTE decided to define how to convert CEA-608 captions to SMPTE-TT. Even if it's not called a “profile”, that's actually what it is. It even has its own namespace: “m608:”.
Conclusion
With all these different versions of TTML, I ask myself what a video player that claims support for TTML will do to get something working. The only chance it has is to implement all the extensions defined in all the different profiles. I pity the player that has to deal with a SMPTE-TT file that has a binary blob in it and is expected to be able to decode this.
Now, what is a caption author supposed to do when creating TTML? They obviously cannot expect all players to be able to play back all TTML versions. Should they create different files depending on what platform they are targeting, i.e. an EBU-TT version, a SMPTE-TT version, a vanilla TTML version, and a Simple Delivery Profile version? Or should they throw all the features of all the versions into one TTML file and hope that the players will pick out the right things they require and drop the rest on the floor?
Maybe the best way to progress would be to make a list of the “safe” features: those features that every TTML profile supports. That may be the best way to get an “interoperable TTML” file. Here's me hoping that this minimal set of features doesn't just end up being the usual (starttime, endtime, text) triple.
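As an illustration (mine, not from the original post), a file restricted to that minimal triple is plain vanilla TTML that every profile discussed above should be able to render:

<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <p begin="0s" end="2.5s">A caption using only the "safe" triple:</p>
      <p begin="2.5s" end="5s">start time, end time, and text.</p>
    </div>
  </body>
</tt>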
UPDATE:
I just found out that UltraViolet has their own profile of SMPTE-TT called CFF-TT (see UltraViolet FAQ and spec). They make some SMPTE-TT fields optional, but introduce a new @forcedDisplayMode attribute under their own namespace “cff:”.