
Other articles (48)
-
Authorisations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
-
Customising categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be likened to a rubrique (section).
For a document of type "category", the fields offered by default are: Texte
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type "media", the fields not displayed by default are: Descriptif rapide
It is also in this configuration area that you can indicate the (...)
-
Adding notes and captions to images
7 February 2011
To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
On other sites (11145)
-
libzvbi-teletextdec: split dvb packet to slices
1 March 2014, by Marton Balint
Instead of using the demux function of libzvbi to split the packet to slices
(vbi lines), lets do it ourselves. This
- eliminates the 1 frame delay between page input and output
- handles non-ascending line numbers more gracefully
- enables us to return error codes on some invalid packets instead of silently ignoring them
Signed-off-by: Marton Balint <cus@passwd.hu>
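For context, the splitting in question operates on DVB teletext PES payloads, which consist of a data_identifier byte followed by a sequence of data units, each carrying one VBI line. The Python sketch below shows roughly what walking those data units looks like, loosely based on the EN 301 775 layout; it is not the FFmpeg/libzvbi code, and the names are illustrative.

# Illustrative only: split a DVB teletext PES payload into per-line data units,
# loosely following EN 301 775. Assumes `payload` starts with the one-byte
# data_identifier and that each teletext data unit carries a 44-byte field.
TELETEXT_NON_SUBTITLE = 0x02
TELETEXT_SUBTITLE = 0x03

def split_teletext_payload(payload: bytes):
    """Yield one tuple per VBI line (slice) found in a DVB teletext PES payload."""
    pos = 1  # skip the data_identifier byte
    while pos + 2 <= len(payload):
        data_unit_id = payload[pos]
        data_unit_length = payload[pos + 1]
        unit = payload[pos + 2 : pos + 2 + data_unit_length]
        pos += 2 + data_unit_length
        if data_unit_id not in (TELETEXT_NON_SUBTITLE, TELETEXT_SUBTITLE):
            continue  # stuffing and other data unit types are skipped
        if len(unit) < 44:
            raise ValueError("truncated teletext data unit")  # report instead of dropping silently
        field_parity = (unit[0] >> 5) & 0x01
        line_offset = unit[0] & 0x1F       # 0 means the line number is undefined
        framing_code = unit[1]             # expected to be 0xE4 for teletext
        data = unit[2:44]                  # 42 bytes: hamming-coded address + packet payload
        yield data_unit_id, field_parity, line_offset, framing_code, data

Handling each data unit individually like this is what makes it possible to cope with non-ascending line numbers and to report malformed units instead of silently dropping them, which is exactly what the commit message above describes.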
-
RTP and H.264 (Packetization Mode 1)... Decoding RAW Data... Help understanding the audio and STAP-A packets
12 February 2014, by Lane
I am attempting to re-create a video from a Wireshark capture. I have researched extensively and the following links provided me with the most useful information...
How to convert H.264 UDP packets to playable media stream or file (defragmentation) (and the 2 sub-links)
H.264 over RTP - Identify SPS and PPS Frames
...I understand from these links and the RFC (RTP Payload Format for H.264 Video) that...
-
The Wireshark capture shows a client communicating with a server via RTSP/RTP by making the following calls... OPTIONS, DESCRIBE, SETUP, SETUP, then PLAY (both audio and video tracks exist)
-
The RTSP response from PLAY (that contains the Sequence and Picture Parameter Sets) contains the following (some lines excluded)...
Media Description, name and address (m): audio 0 RTP/AVP 0
Media Attribute (a): rtpmap:0 PCMU/8000/1
Media Attribute (a): control:trackID=1
Media Attribute (a): x-bufferdelay:0
Media Description, name and address (m): video 0 RTP/AVP 98
Media Attribute (a): rtpmap:98 H264/90000
Media Attribute (a): control:trackID=2
Media Attribute (a): fmtp:98 packetization-mode=1;profile-level-id=4D0028;sprop-parameter-sets=J00AKI2NYCgC3YC1AQEBQAAA+kAAOpg6GAC3IAAzgC7y40MAFuQABnAF3lwWNF3A,KO48gA==
Media Description, name and address (m): metadata 0 RTP/AVP 100
Media Attribute (a): rtpmap:100 IQ-METADATA/90000
Media Attribute (a): control:trackID=3
...the packetization-mode=1 means that only NAL Units, STAP-A and FU-A are accepted
- The streaming RTP packets (video only, DynamicRTP-Type-98) arrive in the following order...
1x
[RTP Header]
0x78 0x00 (Type is 24, meaning STAP-A)
[Remaining Payload]
36x
[RTP Header]
0x7c (Type is 28, meaning FU-A) then either 0x85 (first), 0x05 (middle) or 0x45 (last)
[Remaining Payload]
1x
[RTP Header]
0x18 0x00 (Type is 24, meaning STAP-A)
[Remaining Payload]
8x
[RTP Header]
0x5c (Type is 28, meaning FU-A) then either 0x81 (first), 0x01 (middle) or 0x41 (last)
[Remaining Payload]
...the cycle then repeats... typically there are 29 0x18/0x5c RTP packets for each 0x78/0x7c packet
- Approximately every 100 packets there is an audio RTP packet; all have their Marker set to true and their sequence numbers ascend as expected. Sometimes there is an individual RTP audio packet and sometimes there are three; see a sample one here...
RTP 1042 PT=ITU-T G.711 PCMU, SSRC=0x238E1F29, Seq=31957, Time=1025208762, Mark
...also, the type of each audio RTP packet is different (as far as first bytes go... I see 0x4e, 0x55, 0xc5, 0xc1, 0xbc, 0x3c, 0x4d, 0x5f, 0xcc, 0xce, 0xdc, 0x3e, 0xbf, 0x43, 0xc9, and more)
- From what I gather... to re-create the video, I first need to create a file of the format
0x000001 [SPS Payload]
0x000001 [PPS Payload]
0x000001 [Complete H.264 Frame (NAL Byte, followed by all fragmented RTP payloads without the first 2 bytes)]
0x000001 [Next Frame]
Etc... (a sketch of this reconstruction follows the questions below)
I made some progress where I can run "ffmpeg -i file" without it saying bad input format or being unable to find a codec, but currently it complains about something MP3-related. My questions are as follows...
-
Should I be using the SPS and PPS payload returned by the response to the DESCRIBE RTSP call or use the data sent in the first STAP-A RTP packets (0x78 and 0x18)?
-
How does the file format change to incorporate the audio track?
-
Why are the audio track payload headers all over the place, and how can I make sense of / utilize them?
-
Is my understanding of anything incorrect?
Any help is GREATLY appreciated, thanks!
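For the video track, the reconstruction outlined above can be scripted along these lines: base64-decode the two comma-separated blobs in sprop-parameter-sets to get the SPS and PPS, walk the captured RTP payloads, expand STAP-A packets into their size-prefixed NAL units, reassemble FU-A fragments by rebuilding the NAL header from the FU indicator and FU header, and write every NAL unit behind a 0x000001 start code. The Python sketch below is only an illustration under those assumptions: it expects the video RTP payloads to have been extracted from the capture in sequence order already, it ignores the audio track, and the function names are made up rather than taken from any library.

import base64

START_CODE = b"\x00\x00\x00\x01"   # Annex-B start code

def sprop_to_nals(sprop: str):
    """Decode the comma-separated base64 blobs (SPS, PPS) from sprop-parameter-sets."""
    return [base64.b64decode(part) for part in sprop.split(",")]

def depacketize(rtp_payloads):
    """Turn ordered video RTP payloads into complete NAL units."""
    nals, fu_buffer = [], b""
    for p in rtp_payloads:
        nal_type = p[0] & 0x1F                 # low 5 bits: 24 = STAP-A, 28 = FU-A
        if nal_type == 24:                     # STAP-A: 16-bit-size-prefixed NAL units
            pos = 1
            while pos + 2 <= len(p):
                size = int.from_bytes(p[pos:pos + 2], "big")
                nals.append(p[pos + 2:pos + 2 + size])
                pos += 2 + size
        elif nal_type == 28:                   # FU-A: one fragment of a larger NAL unit
            fu_indicator, fu_header = p[0], p[1]
            if fu_header & 0x80:               # start bit set (e.g. 0x85 = start of an IDR slice)
                # Rebuild the original NAL header: F/NRI from the indicator, type from the FU header
                fu_buffer = bytes([(fu_indicator & 0xE0) | (fu_header & 0x1F)])
            fu_buffer += p[2:]                 # drop the 2-byte FU indicator/header
            if fu_header & 0x40:               # end bit set (e.g. 0x45 = last fragment)
                nals.append(fu_buffer)
                fu_buffer = b""
        else:                                  # single NAL unit packet
            nals.append(p)
    return nals

def write_annexb(path, sprop, rtp_payloads):
    """Write SPS, PPS and the depacketized NAL units as a raw Annex-B H.264 file."""
    with open(path, "wb") as f:
        for nal in sprop_to_nals(sprop) + depacketize(rtp_payloads):
            f.write(START_CODE + nal)

# Hypothetical usage: write_annexb("out.h264", sprop, payloads), then something like
# "ffmpeg -f h264 -i out.h264 -c copy out.mp4" should wrap the raw stream in a container.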
-
How to interpret ffmpeg -pix_fmts output?
11 August 2015, by RichAmberale
Running ffmpeg -pix_fmts returns a list of formats. Snip:
IO... yuv444p16be 3 48
..H.. vdpau_mpeg4 0 0
..H.. dxva2_vld 0 0
IO... rgb444le 3 12
IO... rgb444be 3 12
IO... bgr444le 3 12
What do the I, O and H on the left side mean? What are the numbers in the two rightmost columns?
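For reference, the full ffmpeg -pix_fmts listing starts with a legend that explains the flag letters (I = supported input format for conversion, O = supported output format, H = hardware accelerated, P = paletted, B = bitstream) and a header row labelling the numeric columns NB_COMPONENTS and BITS_PER_PIXEL. The Python sketch below shells out to ffmpeg and parses the data rows under that assumed four-column layout; newer builds may print extra columns, so treat it as an illustration rather than a robust parser.

import subprocess
from dataclasses import dataclass

@dataclass
class PixFmt:
    flags: str            # e.g. "IO..." -- flag letters in fixed positions (I, O, H, P, B)
    name: str             # e.g. "yuv444p16be"
    nb_components: int    # number of components, e.g. 3
    bits_per_pixel: int   # bits per pixel, e.g. 48

def list_pix_fmts():
    """Parse the data rows of `ffmpeg -pix_fmts` (assumed flags/name/components/bpp layout)."""
    out = subprocess.run(["ffmpeg", "-hide_banner", "-pix_fmts"],
                         capture_output=True, text=True).stdout
    fmts = []
    for line in out.splitlines():
        parts = line.split()
        # Data rows look like "IO... yuv444p16be 3 48"; legend and header rows do not parse as such.
        if len(parts) >= 4 and len(parts[0]) == 5 and set(parts[0]) <= set("IOHPB."):
            try:
                fmts.append(PixFmt(parts[0], parts[1], int(parts[2]), int(parts[3])))
            except ValueError:
                continue  # legend lines such as "I.... = Supported Input format for conversion"
    return fmts

if __name__ == "__main__":
    for f in list_pix_fmts():
        if "I" in f.flags and "O" in f.flags:   # usable as both conversion input and output
            print(f.name, f.nb_components, f.bits_per_pixel)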