
Media (91)
-
Spitfire Parade - Crisis
15 May 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Wired NextMusic
14 May 2011, by
Updated: February 2012
Language: English
Type: Video
-
Video d’abeille en portrait
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
-
Sintel MP4 Surround 5.1 Full
13 May 2011, by
Updated: February 2012
Language: English
Type: Video
-
Carte de Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
-
Publier une image simplement
13 April 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (111)
-
Custom menus
14 November 2010, by
MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
This lets channel administrators configure these menus in detail.
Menus created when the site is initialized
By default, three menus are created automatically when the site is initialized: The main menu; Identifier: barrenav; This menu is generally inserted at the top of the page, after the header block, and its identifier makes it compatible with Zpip-based skeletons; (...) -
Contributing to its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
This is done through SPIP’s translation interface, where all of MediaSPIP’s language modules are available. Simply subscribe to the translators’ mailing list to ask for more information.
At the moment, MediaSPIP is only available in French and (...) -
Use, discuss, criticize
13 April 2011, by
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
On other sites (12535)
-
Bootstrapping an AI UGC system — video generation is expensive, APIs are limiting, and I need help navigating it all [closed]
24 June, by Barack _ Ouma
I’m building a solo AI-powered UGC (User-Generated Content) platform — something that automates the creation of short-form content using AI avatars, voices, visuals, and scripts. But I’ve hit a wall with video generation and API limitations.


So far, I’ve integrated TTS and voice cloning (using ElevenLabs), and I’ve gotten image generation working. But video generation (especially talking avatars) has been a nightmare — both financially and technically.


🛠️ Features I’m trying to build:


AI avatars (face + lip-syncing)
Script generation (LLM-driven)
Image generation
Video composition


I’m trying to build a faceless AI content-creation automation platform, an alternative to Makeugc.com, Reelfarm.org, or postbridge.com — just trying to create a working pipeline for automated content.


❌ Challenges so far:


Services like D-ID, Synthesia, Magic Hour, and Luma are either paywalled, have no trials, or are very expensive.


D-ID does support avatar creation, but you need to pay upfront to even access those features. There's no easy/free entry point.


Tools like Google Veo 3 are powerful but clearly not accessible for indie builders.
I’ve looked into open-source models like WAN 2.1, CogVideo, etc., but I have no clue how to run them or what infra is needed.


Now I’m torn between buying my own GPU or renting compute power to self-host these models.


💸 Cost is a huge blocker


I’ve been looking through Replicate’s pricing, and while some models (especially image gen) are manageable, video models get expensive fast. Even GPU rental rates stack up quickly, especially if you’re testing often or experimenting with pipelines. Plus, idle time billing doesn’t help.


💭 What I could really use help with:


Has anyone successfully stitched together APIs (voice, avatar, video) into a working UGC pipeline?


Should I use separate services (e.g. ElevenLabs + Synthesia + WAN) or try to host my own end-to-end system?


Is it cheaper (long term) to buy a used GPU like a 4090 and run things locally? Or better to rent compute short-term?


Any open-source solutions that are beginner-friendly or have minimal setup?
Any existing frameworks or wrappers for UGC media pipelines that make all this easier?


I’ve spent weeks researching, testing APIs, and hitting walls — and while I’ve learned a lot, I’d really appreciate any guidance from folks who’ve been here before.
Thanks in advance 🙏


And good luck to everyone else trying to build with AI on a budget — this stuff isn’t as plug-and-play as it looks on launch videos 💀


-
Beware the builtins
14 January 2010, by Mans — Compilers
GCC includes a large number of builtin functions allegedly providing optimised code for common operations not easily expressed directly in C. Rather than taking such claims at face value (this is GCC, after all), I decided to conduct a small investigation to see how well a few of these functions are actually implemented for various targets.
For my test, I selected the following functions:
- __builtin_bswap32: Byte-swap a 32-bit word.
- __builtin_bswap64: Byte-swap a 64-bit word.
- __builtin_clz: Count leading zeros in a word.
- __builtin_ctz: Count trailing zeros in a word.
- __builtin_prefetch: Prefetch data into cache.
To test the quality of these builtins, I wrapped each in a normal function, then compiled the code for these targets:
- ARMv7
- AVR32
- MIPS
- MIPS64
- PowerPC
- PowerPC64
- x86
- x86_64
In all cases the compiler flags used were -O3 -fomit-frame-pointer, plus any flags required to select a modern CPU model.
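The post does not show the wrappers themselves; a minimal sketch of what such a test harness presumably looks like (the function names are my own):

/* Hypothetical harness: each builtin wrapped in an ordinary function so
   the code generated for it can be inspected in isolation, e.g. with
   objdump -d. */
unsigned int bswap32(unsigned int x) { return __builtin_bswap32(x); }
unsigned long long bswap64(unsigned long long x) { return __builtin_bswap64(x); }
int clz(unsigned int x) { return __builtin_clz(x); }
int ctz(unsigned int x) { return __builtin_ctz(x); }
void prefetch(const void *p) { __builtin_prefetch(p); }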
ARM
Both __builtin_clz and __builtin_prefetch generate the expected CLZ and PLD instructions respectively. The code for __builtin_ctz is reasonable for ARMv6 and earlier:

rsb r3, r0, #0
and r0, r3, r0
clz r0, r0
rsb r0, r0, #31

For ARMv7 (in fact v6T2), however, using the new bit-reversal instruction would have been better:

rbit r0, r0
clz r0, r0
I suspect this is simply a matter of the function not yet having been updated for ARMv7, which is perhaps even excusable given the relatively rare use cases for it.
The byte-reversal functions are where it gets shocking. Rather than use the REV instruction found from ARMv6 on, both of them generate external calls to __bswapsi2 and __bswapdi2 in libgcc, which is plain C code:

SItype
__bswapsi2 (SItype u)
{
  return ((((u) & 0xff000000) >> 24)
          | (((u) & 0x00ff0000) >> 8)
          | (((u) & 0x0000ff00) << 8)
          | (((u) & 0x000000ff) << 24));
}
DItype
__bswapdi2 (DItype u)
{
  return ((((u) & 0xff00000000000000ull) >> 56)
          | (((u) & 0x00ff000000000000ull) >> 40)
          | (((u) & 0x0000ff0000000000ull) >> 24)
          | (((u) & 0x000000ff00000000ull) >> 8)
          | (((u) & 0x00000000ff000000ull) << 8)
          | (((u) & 0x0000000000ff0000ull) << 24)
          | (((u) & 0x000000000000ff00ull) << 40)
          | (((u) & 0x00000000000000ffull) << 56));
}
While the 32-bit version compiles to a reasonable-looking shift/mask/or job, the 64-bit one is a real WTF. Brace yourselves:
push {r4, r5, r6, r7, r8, r9, sl, fp}
mov r5, #0
mov r6, #65280 ; 0xff00
sub sp, sp, #40 ; 0x28
and r7, r0, r5
and r8, r1, r6
str r7, [sp, #8]
str r8, [sp, #12]
mov r9, #0
mov r4, r1
and r5, r0, r9
mov sl, #255 ; 0xff
ldr r9, [sp, #8]
and r6, r4, sl
mov ip, #16711680 ; 0xff0000
str r5, [sp, #16]
str r6, [sp, #20]
lsl r2, r0, #24
and ip, ip, r1
lsr r7, r4, #24
mov r1, #0
lsr r5, r9, #24
mov sl, #0
mov r9, #-16777216 ; 0xff000000
and fp, r0, r9
lsr r6, ip, #8
orr r9, r7, r1
and ip, r4, sl
orr sl, r1, r2
str r6, [sp]
str r9, [sp, #32]
str sl, [sp, #36] ; 0x24
add r8, sp, #32
ldm r8, {r7, r8}
str r1, [sp, #4]
ldm sp, {r9, sl}
orr r7, r7, r9
orr r8, r8, sl
str r7, [sp, #32]
str r8, [sp, #36] ; 0x24
mov r3, r0
mov r7, #16711680 ; 0xff0000
mov r8, #0
and r9, r3, r7
and sl, r4, r8
ldr r0, [sp, #16]
str fp, [sp, #24]
str ip, [sp, #28]
stm sp, {r9, sl}
ldr r7, [sp, #20]
ldr sl, [sp, #12]
ldr fp, [sp, #12]
ldr r8, [sp, #28]
lsr r0, r0, #8
orr r7, r0, r7, lsl #24
lsr r6, sl, #24
orr r5, r5, fp, lsl #8
lsl sl, r8, #8
mov fp, r7
add r8, sp, #32
ldm r8, {r7, r8}
orr r6, r6, r8
ldr r8, [sp, #20]
ldr r0, [sp, #24]
orr r5, r5, r7
lsr r8, r8, #8
orr sl, sl, r0, lsr #24
mov ip, r8
ldr r0, [sp, #4]
orr fp, fp, r5
ldr r5, [sp, #24]
orr ip, ip, r6
ldr r6, [sp]
lsl r9, r5, #8
lsl r8, r0, #24
orr fp, fp, r9
lsl r3, r3, #8
orr r8, r8, r6, lsr #8
orr ip, ip, sl
lsl r7, r6, #24
and r5, r3, #16711680 ; 0xff0000
orr r7, r7, fp
orr r8, r8, ip
orr r4, r1, r7
orr r5, r5, r8
mov r9, r6
mov r1, r5
mov r0, r4
add sp, sp, #40 ; 0x28
pop {r4, r5, r6, r7, r8, r9, sl, fp}
bx lr
That’s right, 91 instructions to move 8 bytes around a bit. GCC definitely has a problem with 64-bit numbers. It is perhaps worth noting that the bswap_64 macro in glibc splits the 64-bit value into 32-bit halves which are then reversed independently, thus side-stepping this weakness of gcc.
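In C the workaround is simple to express; a minimal sketch of that split-and-swap approach (my code, not glibc’s actual macro):

/* Byte-swap each 32-bit half with __builtin_bswap32, then exchange the
   halves, side-stepping gcc's poor 64-bit code generation. */
unsigned long long bswap64_split(unsigned long long u)
{
    unsigned int hi = (unsigned int)(u >> 32);
    unsigned int lo = (unsigned int)u;
    return ((unsigned long long)__builtin_bswap32(lo) << 32)
           | __builtin_bswap32(hi);
}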
As a side note, ARM RVCT (armcc) compiles those functions perfectly into one and two REV instructions, respectively.
AVR32
There is not much to report here. The latest gcc version available is 4.2.4, which doesn’t appear to have the bswap functions. The other three are handled nicely, even using a bit-reverse for __builtin_ctz.
MIPS / MIPS64
The situation on MIPS is similar to ARM: both bswap builtins result in external libgcc calls, while the rest give sensible code.
PowerPC
I could scarcely believe my eyes, but this one is actually not bad. The PowerPC has no byte-reversal instructions, yet someone seems to have taken the time to teach gcc a good instruction sequence for this operation. The PowerPC does have some powerful rotate-and-mask instructions which come in handy here. First the 32-bit version:
rotlwi r0,r3,8
rlwimi r0,r3,24,0,7
rlwimi r0,r3,24,16,23
mr r3,r0
blr
The 64-bit byte-reversal simply applies the above code to each half of the value:
rotlwi r0,r3,8
rlwimi r0,r3,24,0,7
rlwimi r0,r3,24,16,23
rotlwi r3,r4,8
rlwimi r3,r4,24,0,7
rlwimi r3,r4,24,16,23
mr r4,r0
blr
Although I haven’t analysed that code carefully, it looks pretty good.
PowerPC64
Doing 64-bit operations is easier on a 64-bit CPU, right? For you and me perhaps, but not for gcc. Here __builtin_bswap64 gives us the now familiar __bswapdi2 call, and while not as bad as the ARM version, it is not pretty:

rldicr r0,r3,8,55
rldicr r10,r3,56,7
rldicr r0,r0,56,15
rldicl r11,r3,8,56
rldicr r9,r3,16,47
or r11,r10,r11
rldicr r9,r9,48,23
rldicl r10,r0,24,40
rldicr r0,r3,24,39
or r11,r11,r10
rldicl r9,r9,40,24
rldicr r0,r0,40,31
or r9,r11,r9
rlwinm r10,r3,0,0,7
rldicl r0,r0,56,8
or r0,r9,r0
rldicr r10,r10,8,55
rlwinm r11,r3,0,8,15
or r0,r0,r10
rldicr r11,r11,24,39
rlwinm r3,r3,0,16,23
or r0,r0,r11
rldicr r3,r3,40,23
or r3,r0,r3
blr
That is 6 times longer than the (presumably) hand-written 32-bit version.
x86 / x86_64
As one might expect, results on x86 are good. All the tested functions use the available special instructions. One word of caution though: the bit-counting instructions are very slow on some implementations, specifically the Atom, AMD chips, and the notoriously slow Pentium 4E.
Conclusion
In conclusion, I would say gcc builtins can be useful to avoid fragile inline assembler. Before using them, however, one should make sure they are not in fact harmful on the required targets. Not even those builtins mapping directly to CPU instructions can be trusted.
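One way to act on that advice (my sketch, not from the article) is to keep a plain-C fallback next to each builtin and select per target:

/* HAVE_FAST_BSWAP32 is a hypothetical configure-time switch, set only on
   targets where the builtin is known to generate good code. */
static inline unsigned int my_bswap32(unsigned int x)
{
#ifdef HAVE_FAST_BSWAP32
    return __builtin_bswap32(x);
#else
    /* Portable fallback the compiler can usually turn into decent code. */
    return ((x & 0xff000000u) >> 24) | ((x & 0x00ff0000u) >> 8)
         | ((x & 0x0000ff00u) << 8)  | ((x & 0x000000ffu) << 24);
#endif
}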
-
Inside WebM Technology: VP8 Intra and Inter Prediction
20 July 2010, by noreply@blogger.com (Lou Quillio)
Continuing our series on WebM technology, I will discuss the use of prediction methods in the VP8 video codec, with special attention to the TM_PRED and SPLITMV modes, which are unique to VP8.
First, some background. To encode a video frame, block-based codecs such as VP8 first divide the frame into smaller segments called macroblocks. Within each macroblock, the encoder can predict redundant motion and color information based on previously processed blocks. The redundant data can be subtracted from the block, resulting in more efficient compression.
[Image by Fido Factor, licensed under a Creative Commons Attribution License. Based on a work at www.flickr.com.]
A VP8 encoder uses two classes of prediction:
- Intra prediction uses data within a single video frame
- Inter prediction uses data from previously encoded frames
The residual signal data is then encoded using other techniques, such as transform coding.
VP8 Intra Prediction Modes
VP8 intra prediction modes are used with three types of macroblocks:
- 4x4 luma
- 16x16 luma
- 8x8 chroma
Four common intra prediction modes are shared by these macroblocks:
- H_PRED (horizontal prediction). Fills each column of the block with a copy of the left column, L.
- V_PRED (vertical prediction). Fills each row of the block with a copy of the above row, A.
- DC_PRED (DC prediction). Fills the block with a single value using the average of the pixels in the row above, A, and the column to the left, L (see the sketch after this list).
- TM_PRED (TrueMotion prediction). A mode that gets its name from a compression technique developed by On2 Technologies. In addition to the row A and column L, TM_PRED uses the pixel P above and to the left of the block. Horizontal differences between pixels in A (starting from P) are propagated using the pixels from L to start each row.
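To make the simpler modes concrete, here is a minimal sketch (my code, not libvpx’s) of DC_PRED for a 4x4 block, ignoring the edge cases where row A or column L is unavailable:

#include <stdint.h>

/* Fill the 4x4 block with the rounded average of the four pixels in the
   row above (A) and the four in the column to the left (L). */
void dc_pred_4x4(uint8_t X[4][4], const uint8_t A[4], const uint8_t L[4])
{
    int sum = 0;
    for (int k = 0; k < 4; k++)
        sum += A[k] + L[k];
    uint8_t dc = (uint8_t)((sum + 4) >> 3);   /* divide by 8, rounding */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            X[i][j] = dc;
}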
For 4x4 luma blocks, there are six additional intra modes, similar to V_PRED and H_PRED but corresponding to predicting pixels in different directions. These modes are outside the scope of this post; if you want to learn more, see the VP8 Bitstream Guide.
As mentioned above, the TM_PRED mode is unique to VP8. The original post includes a figure showing an example 4x4 block of pixels, where C, the As and the Ls represent reconstructed pixel values from previously coded blocks, and X00 through X33 represent predicted values for the current block. TM_PRED uses the following equation to calculate Xij:
Xij = Li + Aj - C (i, j = 0, 1, 2, 3)
Although the above example uses a 4x4 block, the TM_PRED mode for 8x8 and 16x16 blocks works in the same fashion. TM_PRED is one of the more frequently used intra prediction modes in VP8; for common video sequences it is typically used by 20% to 45% of all blocks that are intra coded. Overall, together with the other intra prediction modes, TM_PRED helps VP8 achieve very good compression efficiency, especially for key frames, which can only use intra modes (key frames by their very nature cannot refer to previously encoded frames).
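As an illustration of the equation above, a minimal sketch (my code, not libvpx’s) of TM_PRED for a 4x4 block, with the result clamped to the valid 8-bit pixel range:

#include <stdint.h>

static uint8_t clamp255(int v)
{
    return (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
}

/* Predicted pixel X[i][j] = L[i] + A[j] - C, where C is the pixel above
   and to the left of the block. */
void tm_pred_4x4(uint8_t X[4][4], const uint8_t A[4],
                 const uint8_t L[4], uint8_t C)
{
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            X[i][j] = clamp255(L[i] + A[j] - C);
}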
VP8 Inter Prediction Modes
In VP8, inter prediction modes are used only on inter frames (non-key frames). For any VP8 inter frame, there are typically three previously coded reference frames that can be used for prediction. A typical inter prediction block is constructed using a motion vector to copy a block from one of the three frames. The motion vector points to the location of a pixel block to be copied. In most video compression schemes, a good portion of the bits are spent on encoding motion vectors; the portion can be especially large for video encoded at lower datarates.
Like previous VPx codecs, VP8 encodes motion vectors very efficiently by reusing vectors from neighboring macroblocks (a macroblock includes one 16x16 luma block and two 8x8 chroma blocks). VP8 uses a similar strategy in the overall design of inter prediction modes. For example, the prediction modes "NEAREST" and "NEAR" make use of the last and second-to-last non-zero motion vectors from neighboring macroblocks. These inter prediction modes can be used in combination with any of the three different reference frames.
In addition, VP8 has a very sophisticated, flexible inter prediction mode called SPLITMV. It was designed to enable flexible partitioning of a macroblock into sub-blocks to achieve better inter prediction, and is very useful when objects within a macroblock have different motion characteristics. Within a macroblock coded in SPLITMV mode, each sub-block can have its own motion vector. Similar to the strategy of reusing motion vectors at the macroblock level, a sub-block can also use motion vectors from neighboring sub-blocks above or to the left of the current block. This strategy is very flexible and can effectively encode any shape of sub-macroblock partitioning, and do so efficiently.
The original post shows an example of a macroblock with 16x16 luma pixels partitioned into 16 4x4 blocks, where "New" represents a 4x4 block coded with a new motion vector, and "Left" and "Above" represent a 4x4 block coded using the motion vector from the left and above, respectively. This example effectively partitions the 16x16 macroblock into 3 different segments with 3 different motion vectors (represented in the figure by 1, 2 and 3).
Through effective use of intra and inter prediction modes, WebM encoder implementations can achieve great compression quality on a wide range of source material. If you want to delve further into VP8 prediction modes, read the VP8 Bitstream Guide or examine the reconintra.c and rdopt.c files in the VP8 source tree.
Yaowu Xu, Ph.D. is a codec engineer at Google.