Low-latency trading system architecture




11 Best Practices for Low Latency Systems


It's been 8 years since Google noticed that an extra 500 ms of latency dropped traffic by 20% and Amazon realized that 100 ms of extra latency dropped sales by 1%. Ever since, developers have been racing to the bottom of the latency curve, culminating in front-end developers squeezing every last millisecond out of their JavaScript, CSS, and even HTML. What follows is a random walk through a variety of best practices to keep in mind when designing low-latency systems. Most of these suggestions are taken to the logical extreme, but of course trade-offs can be made. (Thanks to an anonymous user for asking this question on Quora and prompting me to put this down in writing.)


Choose the right language


Scripting languages need not apply. Though they keep getting faster and faster, when you are trying to shave those last few milliseconds off your processing time you cannot afford the overhead of an interpreted language. Additionally, you will want a strong memory model to enable lock-free programming, so you should be looking at Java, Scala, C++11, or Go.


Keep it all in memory


I/O will kill your latency, so make sure all of your data is in memory. This generally means managing your own in-memory data structures and maintaining a persistent log, so you can rebuild state after a machine or process restart. Some options for a persistent log include Bitcask, Krati, LevelDB, and BDB-JE. Alternatively, you might be able to get away with running a local, persisted in-memory database like redis or MongoDB (with memory >> data). Note that you can lose some data on crash due to their background syncing to disk.
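The pattern above can be sketched in a few lines. This is a minimal toy, not any of the stores named above: the in-memory map serves all reads, every mutation is appended to a log first, and the log alone is enough to rebuild state after a restart. All names here (`InMemoryStore`, `replay`) are hypothetical, and the log is an in-process list standing in for a durable file.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class InMemoryStore {
    private final Map<String, Long> state = new HashMap<>();
    private final List<String[]> log = new ArrayList<>(); // stand-in for a durable log

    public void put(String key, long value) {
        log.add(new String[] {key, Long.toString(value)}); // persist the mutation first
        state.put(key, value);                             // then apply it in memory
    }

    public Long get(String key) { return state.get(key); }

    public List<String[]> log() { return log; }

    // After a crash, rebuild the in-memory structures by replaying the log in order.
    public static InMemoryStore replay(List<String[]> journal) {
        InMemoryStore store = new InMemoryStore();
        for (String[] entry : journal) store.put(entry[0], Long.parseLong(entry[1]));
        return store;
    }
}
```

All reads are served at memory speed; the only I/O on the hot path is a sequential append.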


Keep data and processing colocated


Network hops are faster than disk seeks, but even so they add a lot of overhead. Ideally, your data should fit entirely in memory on one host. With AWS providing almost 1/4 TB of RAM in the cloud and physical servers offering multiple TBs, this is generally possible. If you need to run on more than one host, make sure your data and requests are properly partitioned so that all the data necessary to serve a given request is available locally.
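A minimal sketch of that partitioning idea: route every request by its key so the same key always lands on the same host, and that host keeps the corresponding data in its own memory. The class name and hash-modulo scheme are illustrative assumptions; real systems often use consistent hashing to survive host-count changes.

```java
public class Partitioner {
    private final int hostCount;

    public Partitioner(int hostCount) { this.hostCount = hostCount; }

    // Deterministic: the same key always maps to the same host, so the data
    // needed to serve a request for that key can live entirely on that host.
    public int hostFor(String key) {
        return Math.floorMod(key.hashCode(), hostCount);
    }
}
```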


Keep the system underutilized


Low latency requires always having resources to process the request. Don't try to run at the limit of what your hardware/software can provide. Always have lots of head room for bursts, and then some.


Keep context switches to a minimum


Context switches are a sign that you are doing more compute work than you have resources for. You will want to limit your number of threads to the number of cores on your system and to pin each thread to its own core.
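The first half of that advice can be sketched directly in the JDK; note that actual core pinning is not available in standard Java and needs an external mechanism (e.g. `taskset` on Linux, or a JNI/JNA call to `sched_setaffinity`), so this sketch only caps the thread count at the core count.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CoreSizedPool {
    // One worker per core: with no oversubscription, the scheduler has no
    // reason to time-slice workers against each other.
    public static int workerCount() {
        return Runtime.getRuntime().availableProcessors();
    }

    public static ExecutorService newPool() {
        return Executors.newFixedThreadPool(workerCount());
    }
}
```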


Keep your reads sequential


All forms of storage, whether rotational, flash-based, or memory, perform significantly better when used sequentially. When issuing sequential reads against memory you trigger the use of prefetching at the RAM level as well as at the CPU cache level. If done properly, the next piece of data you need will always be in L1 cache right before you need it. The easiest way to help this process along is to make heavy use of arrays of primitive data types or structs. Following pointers, whether through the use of linked lists or through arrays of objects, should be avoided at all costs.
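To make the contrast concrete, here is a sketch of the two layouts. Summing a primitive `long[]` walks contiguous memory, which the hardware prefetcher handles well; a `Long[]` stores pointers to separately allocated heap objects, so each element costs an extra dereference. Both methods compute the same result; only the memory access pattern differs.

```java
public class SequentialReads {
    // Contiguous primitive array: each iteration reads the next 8 bytes,
    // so the prefetcher can keep the following cache line ready.
    public static long sumPrimitive(long[] data) {
        long sum = 0;
        for (int i = 0; i < data.length; i++) sum += data[i];
        return sum;
    }

    // Boxed array: the array holds references, and every element is a
    // separate heap object reached through a pointer chase.
    public static long sumBoxed(Long[] data) {
        long sum = 0;
        for (Long v : data) sum += v;
        return sum;
    }
}
```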


Batch your writes


This sounds counterintuitive, but you can gain significant improvements in performance by batching writes. However, there is a misconception that this means the system should wait an arbitrary amount of time before doing a write. Instead, one thread should spin in a tight loop doing I/O. Each write will batch all the data that arrived since the last write was issued. This makes for a very fast and adaptable system.
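A minimal sketch of that loop body, with hypothetical names: producers enqueue records, and each iteration of the I/O thread drains everything that arrived since the previous write into one batch. Under light load the batches are tiny (low latency); under heavy load they grow automatically (high throughput), which is exactly the adaptive behavior described above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class BatchingWriter {
    private final Queue<String> pending = new ConcurrentLinkedQueue<>();

    // Called by any producer thread; never blocks.
    public void submit(String record) { pending.add(record); }

    // One iteration of the tight I/O loop: drain all records that arrived
    // since the last write into a single batch. The caller then performs
    // one write for the whole batch.
    public List<String> nextBatch() {
        List<String> batch = new ArrayList<>();
        String r;
        while ((r = pending.poll()) != null) batch.add(r);
        return batch;
    }
}
```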


Respect your cache


With all of these optimizations in place, memory access quickly becomes a bottleneck. Pinning threads to their own cores helps reduce CPU cache pollution, and sequential I/O also helps preload the cache. Beyond that, you should keep memory footprints down by using primitive data types so more data fits in cache. Additionally, you can look into cache-oblivious algorithms, which work by recursively breaking down the data until it fits in cache and then doing whatever processing is necessary.


Non-blocking as much as possible


Make friends with non-blocking and wait-free data structures and algorithms. Every time you use a lock you have to go down the stack to the OS to mediate the lock, which is a huge overhead. Often, if you know what you are doing, you can get around locks by understanding the memory model of the JVM, C++11, or Go.
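The simplest lock-free building block on the JVM is a compare-and-swap loop. This sketch uses `java.util.concurrent.atomic.AtomicLong`: no thread ever parks in the kernel; a thread that loses a CAS race simply reloads the value and retries.

```java
import java.util.concurrent.atomic.AtomicLong;

public class LockFreeCounter {
    private final AtomicLong value = new AtomicLong();

    // Lock-free increment: read, compute, compare-and-swap, retry on contention.
    // No mutex is taken, so no thread is ever descheduled while holding one.
    public long increment() {
        while (true) {
            long current = value.get();
            long next = current + 1;
            if (value.compareAndSet(current, next)) return next;
        }
    }

    public long get() { return value.get(); }
}
```

(`AtomicLong.incrementAndGet()` does the same thing internally; the explicit loop is written out here to show the CAS-retry pattern.)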


Async as much as possible


Any processing, and particularly any I/O, that is not absolutely necessary for building the response should be done outside the critical path.


Parallelize as much as possible


Any processing, and particularly any I/O, that can happen in parallel should be done in parallel. For instance, if your high availability strategy includes logging transactions to disk and sending transactions to a secondary server, those actions can happen in parallel.
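The disk-plus-secondary example can be sketched with `CompletableFuture`: start both operations, then acknowledge only once both have completed, so the total wait is the slower of the two rather than their sum. The two async bodies here are stand-ins, not real I/O, and all names are hypothetical.

```java
import java.util.concurrent.CompletableFuture;

public class ParallelCommit {
    // Stand-in for journaling the transaction to local disk.
    static CompletableFuture<String> journalToDisk(String txn) {
        return CompletableFuture.supplyAsync(() -> "logged:" + txn);
    }

    // Stand-in for replicating the transaction to a secondary server.
    static CompletableFuture<String> replicate(String txn) {
        return CompletableFuture.supplyAsync(() -> "replicated:" + txn);
    }

    public static String commit(String txn) {
        CompletableFuture<String> log = journalToDisk(txn); // starts immediately
        CompletableFuture<String> rep = replicate(txn);     // runs concurrently
        return log.join() + "|" + rep.join();               // wait for both
    }
}
```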


Almost all of this comes from following what LMAX is doing with their Disruptor project. Read up on that, and follow anything that Martin Thompson does.




Published by Benjamin Darfler


29 thoughts on “11 Best Practices for Low Latency Systems”


And happy to be on your list 🙂


Good article. One beef: Go does not have a sophisticated memory model like Java or C++11. If your system fits Go's goroutines-and-channels model, well and good; otherwise you're out of luck. AFAIK it is not possible to turn off the runtime scheduler, so there are no native OS threads, and the ability to build your own lock-free data structures (such as SPSC queues/ring buffers) is also sorely missing.


Thanks for the reply. While Go's memory model (golang/ref/mem) may not be as robust as Java's or C++11's, I was under the impression that you could still build lock-free data structures with it. For instance github/textnode/gringo, github/scryner/lfreequeue and github/mocchira/golfhash. Maybe I'm missing something? Admittedly I know much less about Go than I do about the JVM.


Benjamin, the Go memory model detailed here: golang/ref/mem is mostly in terms of channels and mutexes. I looked through the packages you listed, and while the data structures are “lock free”, they are not equivalent to what one can build in Java/C++11. The sync package, as of now, has no support for relaxed atomics or the acquire/release semantics of C++11. Without that support it is hard to build SPSC data structures as efficient as those possible in C++/Java. The projects you link use atomic.Add…, which is a sequentially consistent atomic. It is built with XADD as it should be – github/tonnerre/golang/blob/master/src/pkg/sync/atomic/asm_amd64.s.


I'm not trying to take Go down. It takes minimal effort to write asynchronous and concurrent I/O code that is fast enough for most people. The std library is also highly tuned for performance. Golang also has support for structs, which are missing in Java. But as it stands, I think the simplistic memory model and the goroutine runtime get in the way of building the kind of systems you are talking about.


Thanks for the in-depth reply. I hope people find it useful.


While a ‘native’ language is probably better, it is not strictly necessary. Facebook has shown us it can be done in PHP. Granted, they use precompiled PHP with their HHVM machine. But it is possible!


Unfortunately, PHP still doesn't have an acceptable memory model, even if HHVM improves execution speed significantly.


As much as I strive to use higher-level languages like the next person, I think the only way to achieve the low-latency applications people are looking for is to drop down to a language like C. It seems the harder a language is to write in, the faster it runs.


I'd strongly encourage you to look at the work being done in the projects and blogs I linked to. The JVM is quickly becoming the go-to platform for these kinds of systems, since it provides a robust memory model and garbage collection, which together enable lock-free programming that is nearly impossible with a weak or undefined memory model and reference counting for memory management.


I'll take a look, Benjamin. Thanks for pointing them out.


Garbage collection for lock-free programming is a bit of a deus ex machina. MPMC and SPSC queues can be built without any need for GC. There are also many ways to do lock-free programming without garbage collection, and reference counting is not the only way. Hazard pointers, RCU, proxy-collectors etc. all provide support for deferred reclamation, and are usually coded in support of one (non-generic) algorithm, so they are generally much easier to build. Of course the trade-off lies in the fact that production-quality GCs have a great deal of work in them and will help less experienced programmers write lock-free algorithms (should they be doing that?) without coding deferred reclamation schemes. Some links on work done in this field: cs.toronto.edu/


Yes, C/C++ only recently gained a memory model, but that doesn't mean they were completely unsuitable for lock-free code before. GCC and other high-quality compilers have had compiler-specific directives for doing lock-free programming on supported platforms for a very long time – it just wasn't standardized in the language. Linux and other platforms have provided these primitives for some time as well. Java's unique position was that it provided a formalized memory model guaranteed to work across all supported platforms. While in principle that is awesome, most server-side developers work on one platform (Linux/Windows). They already had the tools to build lock-free code for their platform.


GC is a great tool, but it is not required. It has a cost both in performance and in complexity (all the tricks needed to avoid STW GC). C++11/C11 already support proper memory models. Let's not forget that JVMs are under no obligation to support the Unsafe API going forward. Unsafe code is “unsafe”, so you lose the benefits of Java's safety features. Finally, IMO the Unsafe code used for memory layout and for simulating structs in Java looks far uglier than C/C++ structs, where the compiler does that work for you in a reliable way. C and C++ also give access to all the low-level platform-specific power tools such as the PAUSE instruction, SSE/AVX/NEON etc. You can even tune your code layout via linker scripts! The power offered by the C/C++ toolchain is really unmatched by the JVM. Java is a great platform, but I think its biggest advantage is that ordinary business logic (90% of your code?) can still rely on GC and the safety features while making use of highly tuned and tested libraries written with Unsafe. This is a great trade-off between getting the last 5% of perf and being productive. A trade-off that makes sense for a lot of people, but a trade-off nonetheless. Writing complicated application code in C/C++ is a nightmare, after all.




Missing #12: Don't use garbage-collected languages. GC is a worst-case bottleneck. It is liable to stop all threads. It is a global. It distracts the architect from managing one of the most common resources (memory close to the CPU).


Actually, much of this work comes straight from Java. To do lock-free programming you need a clear memory model, which C++ only recently gained. If you know how to work with the GC, and not against it, you can build low-latency systems much more easily.


I have to agree with Ben here. There has been a lot of progress in GC parallelism over the last decade, the G1 collector being the latest incarnation. It can take some time to tune the heap and the various knobs so the GC collects with almost no pauses, but that is little compared to the developer time it takes to do without a GC.


You can even take it a step further and build systems that produce so little garbage that you can easily push your GC outside of your operating window. That's how all the high-frequency trading shops do it when running on the JVM.




> Don't use garbage-collected languages.


Or at least “traditional” garbage-collected languages. Because they differ – while Erlang also has a collector, it doesn't create bottlenecks because it doesn't “stop the world” like Java does when collecting garbage – instead it pauses individual “small” micro-threads on a microsecond scale, so it isn't noticeable in the large.


Rewrite that to “traditional” garbage-collection algorithms. At LMAX we use Azul Zing, and just by using a different JVM with a different approach to garbage collection we got huge improvements in performance, because both major and minor GCs are orders of magnitude cheaper.


There are other costs that offset this, of course: you use a lot more, and Zing is not cheap.


Reblogged this on Java Prorgram Examples and commented:


One of the must-read articles for Java programmers: in 10 minutes it gives you the lessons you would otherwise learn only after spending considerable time tuning and developing low-latency systems in Java.


Reviving an old thread, but (surprisingly) this has to be pointed out:


1) Higher-level languages (e.g. Java) do not conjure up hardware functionality that is unavailable to lower-level languages (e.g. C); claiming that such-and-such is “impossible” in C while readily achievable in Java is complete rubbish, unless you acknowledge that Java runs on virtual hardware, in which the JVM must synthesize functionality required by Java but not provided by the physical hardware. If a JVM (e.g. one written in C) can synthesize functionality X, then so can a C programmer.


2) “Lock free” is not what people think it is, except almost by coincidence in certain circumstances such as single-core x86; multicore x86 cannot run lock-free without memory barriers, which have complexities and costs similar to regular locking. As per 1 above, if lock-free works in a given environment, it is because it is supported by the hardware or emulated/synthesized by software in a virtual environment.


Great points, Julius. The point I was trying to make (perhaps unsuccessfully) is that it is prohibitively hard to implement many of these patterns in C, since they rely on GC. It goes beyond simply using memory barriers. You have to consider memory reclamation, which gets particularly tricky when you are dealing with lock-free and wait-free algorithms. That's where GC adds a big win. That said, I've heard Rust has some very interesting ideas about memory ownership that may start to address some of these concerns.


The LMAX Architecture


LMAX is a new retail financial trading platform. As a result it has to process many trades with low latency. The system is built on the JVM platform and centers on a Business Logic Processor that can handle 6 million orders per second on a single thread. The Business Logic Processor runs entirely in memory using event sourcing. The Business Logic Processor is surrounded by Disruptors – a concurrency component that implements a network of queues that operate without needing locks. During the design process the team concluded that recent directions in high-performance concurrency models that use queues are fundamentally at odds with modern CPU design.


Over the last few years we keep hearing that “the free lunch is over” [1] – we can't expect increases in individual CPU speed. So to write fast code we need to explicitly use multiple processors with concurrent software. This is not good news – writing concurrent code is very hard. Locks and semaphores are hard to reason about and hard to test – meaning we are spending more time worrying about satisfying the computer than about solving the domain problem. Various concurrency models, such as Actors and Software Transactional Memory, aim to make this easier – but there is still a burden that introduces bugs and complexity.


So I was fascinated to hear about a talk at QCon London in March last year from LMAX. LMAX is a new retail financial trading platform. Its business innovation is that it is a retail platform – allowing anyone to trade in a range of financial derivative products [2]. A trading platform like this needs very low latency – trades have to be processed quickly because the market is moving rapidly. A retail platform adds complexity because it has to do this for lots of people. So the result is more users, with lots of trades, all of which need to be processed quickly. [3]


Given the shift to multi-core thinking, this kind of demanding performance would naturally suggest an explicitly concurrent programming model – and indeed this was their starting point. But the thing that got people's attention at QCon was that this wasn't where they ended up. In fact, they ended up doing all the business logic for their platform – all trades, from all customers, in all markets – on a single thread. A thread that will process 6 million orders per second using commodity hardware. [4]


Processing lots of transactions with low latency and none of the complexities of concurrent code – how can I resist digging into that? Fortunately, another difference LMAX has from other financial companies is that they are quite happy to talk about their technological decisions. So now that LMAX has been in production for a while, it's time to explore their fascinating design.


Overall Structure


Figure 1: LMAX's architecture in three blobs.


At a top level, the architecture has three parts:


a business logic processor [5], an input disruptor, and output disruptors.


As its name implies, the business logic processor handles all the business logic in the application. As I indicated above, it does this as a single-threaded Java program that reacts to method calls and produces output events. Consequently it is a simple Java program that doesn't require any platform frameworks to run other than the JVM itself, which allows it to be run easily in test environments.


Although the Business Logic Processor can run in a simple environment for testing, there is rather more involved choreography to get it running in a production setting. Input messages need to be taken off a network gateway and unmarshaled, replicated, and journaled. Output messages need to be marshaled for the network. These tasks are handled by the input and output disruptors. Unlike the Business Logic Processor, these are concurrent components, since they involve I/O operations that are both slow and independent. They were designed and built especially for LMAX, but they (like the overall architecture) are applicable elsewhere.


Business Logic Processor


Keeping it all in memory


The Business Logic Processor takes input messages sequentially (in the form of a method invocation), runs business logic on them, and emits output events. It operates entirely in memory; there is no database or other persistent store. Keeping all data in memory has two important benefits. Firstly, it's fast – there's no database to provide slow I/O to access, nor is there any transactional behavior to execute, since all the processing is done sequentially. The second advantage is that it simplifies programming – there is no object/relational mapping to do. All the code can be written using Java's object model without having to make any compromises for the mapping to a database.


Using an in-memory structure has an important consequence – what happens if everything crashes? Even the most resilient systems are vulnerable to someone pulling the power. The heart of dealing with this is Event Sourcing – which means that the current state of the Business Logic Processor is entirely derivable by processing the input events. As long as the input event stream is kept in a durable store (which is one of the jobs of the input disruptor), you can always recreate the current state of the business logic engine by replaying the events.
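The essence of that recovery path can be sketched in a few lines. This toy processor (the `AccountProcessor` name and single-balance domain are illustrative, not LMAX's actual model) holds its state only in memory; because every state change is a deterministic function of an input event, replaying the journaled events in order reconstructs the exact same state.

```java
import java.util.List;

public class AccountProcessor {
    private long balance; // in-memory state only; no database behind it

    // Deterministic: the same event always produces the same state change.
    public void apply(long deltaEvent) { balance += deltaEvent; }

    public long balance() { return balance; }

    // Recovery after a crash: feed the durable journal back through the
    // same apply() logic to rebuild the current state.
    public static AccountProcessor replay(List<Long> journal) {
        AccountProcessor p = new AccountProcessor();
        for (long e : journal) p.apply(e);
        return p;
    }
}
```

Determinism is the key property: any nondeterministic input (clock reads, random numbers) must itself arrive as an event, or replay would diverge.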


A good way to understand this is to think of a version control system. Version control systems are a sequence of commits; at any time you can build a working copy by applying those commits. VCSs are more complicated than the Business Logic Processor because they must support branching, while the Business Logic Processor is a simple sequence.


So, in theory, you can always rebuild the state of the Business Logic Processor by reprocessing all the events. In practice, however, that would take too long should you need to spin one up. So, just as with version control systems, LMAX can make snapshots of the Business Logic Processor state and restore from the snapshots. They take a snapshot every night during periods of low activity. Restarting the Business Logic Processor is fast; a full restart – including restarting the JVM, loading a recent snapshot, and replaying a day's worth of journals – takes well under a minute.


Snapshots make starting up a new Business Logic Processor faster, but not quickly enough should a Business Logic Processor crash at 2pm. As a result, LMAX keeps multiple Business Logic Processors running all the time [6]. Each input event is processed by multiple processors, but all but one of the processors have their output ignored. Should the live processor fail, the system switches to another one. This ability to handle failover is another benefit of using Event Sourcing.


By event sourcing into replicas, they can switch between processors in a matter of microseconds. As well as taking snapshots every night, they also restart the Business Logic Processors every night. The replication allows them to do this with no downtime, so they continue to process trades 24/7.


For more background on Event Sourcing, see the draft pattern on my site from a few years ago. The article is more focused on handling temporal relationships than on the benefits that LMAX uses, but it does explain the core idea.


Event sourcing is valuable because it allows the processor to run entirely in memory, but it has another considerable advantage for diagnostics. If some unexpected behavior occurs, the team copies the sequence of events to their development environment and replays them there. This allows them to examine what happened much more easily than is possible in most environments.


This diagnostic capability extends to business diagnostics. There are some business tasks, such as risk management, that require significant computation that isn't needed for processing orders. An example is getting a list of the top 20 customers by risk profile based on their current trading positions. The team handles this by spinning up a replicated domain model and carrying out the computation there, where it won't interfere with the core order processing. These analysis domain models can have variant data models, keep different data sets in memory, and run on different machines.


Tuning performance


So far I've explained that the key to the speed of the Business Logic Processor is doing everything sequentially, in memory. Just doing this (and nothing really stupid) allows developers to write code that can process 10K TPS [7]. They then found that concentrating on the simple elements of good code could bring this up into the 100K TPS range. This just needs well-factored code and small methods – essentially this allows Hotspot to do a better job of optimization and allows CPUs to be more efficient in caching the code as it runs.


It took a bit more cleverness to go up another order of magnitude. There are several things that the LMAX team found helpful to get there. One was to write custom implementations of the Java collections, designed to be cache-friendly and careful with garbage [8]. An example of this is using primitive Java longs as hashmap keys with a specially written array-backed Map implementation (LongToObjectHashMap). In general they've found that the choice of data structures often makes a big difference. Most programmers just grab whatever List they used last time rather than thinking about which implementation is the right one for this context. [9]
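To show the flavor of such a collection (this is my own toy sketch of the idea, not LMAX's actual LongToObjectHashMap): keys live in a flat `long[]` with open addressing, so there is no boxing, no per-entry `Entry` object to allocate, and probing walks contiguous memory. For brevity this version has a fixed capacity, no resizing, and reserves key 0 as the empty marker.

```java
public class LongToObjectMap<V> {
    private final long[] keys;     // flat primitive array: no boxed Long keys
    private final Object[] values;

    public LongToObjectMap(int capacity) {
        keys = new long[capacity];   // 0 means "empty slot" (key 0 is reserved)
        values = new Object[capacity];
    }

    // Linear probing over the contiguous key array is cache-friendly:
    // a collision reads the next slot, usually on the same cache line.
    private int slot(long key) {
        int i = (int) Math.floorMod(key, (long) keys.length);
        while (keys[i] != 0 && keys[i] != key) i = (i + 1) % keys.length;
        return i;
    }

    public void put(long key, V value) {
        int i = slot(key);
        keys[i] = key;
        values[i] = value;
    }

    @SuppressWarnings("unchecked")
    public V get(long key) {
        int i = slot(key);
        return keys[i] == key ? (V) values[i] : null;
    }
}
```

A `HashMap<Long, V>` doing the same job would allocate a boxed `Long` plus an `Entry` node per insertion and chase two pointers per lookup.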


Another technique to reach that top level of performance is putting attention into performance testing. I've long noticed that people talk a lot about techniques to improve performance, but the one thing that really makes a difference is to test it. Even good programmers are very capable of constructing performance arguments that end up being wrong, so the best programmers prefer profilers and test cases to speculation. [10] The LMAX team has also found that writing tests first is a very effective discipline for performance tests.


Programming Model


This style of processing does introduce some constraints into the way you write and organize the business logic. The first of these is that you have to tease out any interaction with external services. An external service call is going to be slow, and with a single thread it will halt the entire order processing machine. As a result you can't make calls to external services within the business logic. Instead you need to finish that interaction with an output event, and wait for another input event to pick it back up again.


I'll use a simple non-LMAX example to illustrate. Imagine you are ordering jelly beans by credit card. A simple retailing system would take your order information, use a credit card validation service to check your credit card number, and then confirm your order - all within a single operation. The thread processing your order would block while waiting for the credit card to be checked, but that block wouldn't be very long for the user, and the server can always run another thread on the processor while it's waiting.


In the LMAX architecture, you would split this operation into two. The first operation would capture the order information and finish by outputting an event (credit card validation requested) to the credit card company. The Business Logic Processor would then carry on processing events for other customers until it received a credit-card-validated event in its input event stream. On processing that event it would carry out the confirmation tasks for that order.
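A minimal sketch of that two-step flow (my own illustration - all class and event names here are hypothetical, not LMAX code): the first handler captures the order and emits an output event; the second handler finishes the work when the card-validated input event comes back. The single business-logic thread never blocks on the card company.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Hypothetical sketch of splitting one blocking operation into two
// event handlers, as described in the text above.
final class OrderFlow {
    final Queue<String> outputEvents = new ArrayDeque<>();   // to the card company
    final Map<Long, String> pendingOrders = new HashMap<>(); // awaiting validation
    final Map<Long, String> confirmedOrders = new HashMap<>();

    // Input event 1: the customer places an order. Capture it, emit an
    // output event, and return immediately - no blocking call.
    void onOrderPlaced(long orderId, String details) {
        pendingOrders.put(orderId, details);
        outputEvents.add("card-validation-requested:" + orderId);
        // other customers' events are processed while validation is pending
    }

    // Input event 2: the card company's answer arrives as a new input event,
    // and only now do we carry out the confirmation tasks.
    void onCardValidated(long orderId) {
        String details = pendingOrders.remove(orderId);
        if (details != null) confirmedOrders.put(orderId, details);
    }
}
```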


Working in this kind of event-driven, asynchronous style is somewhat unusual - although using asynchrony to improve the responsiveness of an application is a familiar technique. It also helps the business process be more resilient, as you have to be more explicit in thinking about the different things that can happen with the remote application.


A second feature of the programming model lies in error handling. The traditional model of sessions and database transactions provides a helpful error handling capability. Should anything go wrong, it's easy to throw away everything that happened so far in the interaction. Session data is transient, and can be discarded, at the cost of some irritation to the user if in the middle of something complicated. If an error occurs on the database side you can roll back the transaction.


LMAX's in-memory structures are persistent across input events, so if there is an error it's important not to leave that memory in an inconsistent state. However there's no automated rollback facility. As a consequence the LMAX team puts a lot of attention into ensuring the input events are fully valid before doing any mutation of the in-memory persistent state. They have found that testing is a key tool in flushing out these kinds of problems before going into production.


Input and Output Disruptors.


Although the business logic occurs in a single thread, there are a number of tasks to be done before we can invoke a business object method. The original input for processing comes off the wire in the form of a message, and this message needs to be unmarshaled into a form convenient for the Business Logic Processor to use. Event Sourcing relies on keeping a durable journal of all the input events, so each input message needs to be journaled onto a durable store. Finally the architecture relies on a cluster of Business Logic Processors, so we have to replicate the input messages across this cluster. Similarly on the output side, the output events need to be marshaled for transmission over the network.


Figure 2: The activities done by the input disruptor (using UML activity diagram notation)


The replicator and journaler involve IO and therefore are relatively slow. After all, the central idea of the Business Logic Processor is that it avoids doing any IO. Also these three tasks are relatively independent: all of them need to be done before the Business Logic Processor works on a message, but they can be done in any order. So unlike with the Business Logic Processor, where each trade changes the market for subsequent trades, there is a natural fit for concurrency.


To handle this concurrency the LMAX team developed a special concurrency component, which they call a Disruptor [11].


The LMAX team have released the source code for the Disruptor with an open source licence.


At a crude level you can think of a Disruptor as a multicast graph of queues where producers put objects on it that are sent to all the consumers for parallel consumption through separate downstream queues. When you look inside you see that this network of queues is really a single data structure - a ring buffer. Each producer and consumer has a sequence counter to indicate which slot in the buffer it's currently working on. Each producer/consumer writes its own sequence counter but can read the others' sequence counters. This way the producer can read the consumers' counters to ensure the slot it wants to write in is available without any locks on the counters. Similarly a consumer can ensure it only processes messages once another consumer is done with it by watching the counters.
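As a rough illustration of those counter mechanics, here is my own single-producer/single-consumer sketch - far simpler than the real Disruptor, and not LMAX code. The key property the text describes is visible: each side advances only its own counter and merely reads the other's, so no locks are involved.

```java
import java.util.concurrent.atomic.AtomicLong;

// Crude SPSC ring-buffer sketch. Each side owns (writes) exactly one
// counter and only reads the other's, so coordination needs no locks.
// The real Disruptor is considerably more sophisticated.
final class MiniRing {
    private final Object[] slots;
    private final int mask;
    private final AtomicLong produced = new AtomicLong(-1); // last slot published
    private final AtomicLong consumed = new AtomicLong(-1); // last slot consumed

    MiniRing(int sizePowerOfTwo) {
        slots = new Object[sizePowerOfTwo];
        mask = sizePowerOfTwo - 1;
    }

    boolean offer(Object value) {
        long next = produced.get() + 1;
        if (next - consumed.get() > slots.length) return false; // ring full
        slots[(int) (next & mask)] = value;
        produced.set(next); // publish: the consumer may now read this slot
        return true;
    }

    Object poll() {
        long next = consumed.get() + 1;
        if (next > produced.get()) return null; // nothing published yet
        Object value = slots[(int) (next & mask)];
        consumed.set(next); // tell the producer this slot is free again
        return value;
    }
}
```

The producer checks the consumer's counter before writing (so it never overwrites an unconsumed slot) and the consumer checks the producer's counter before reading - exactly the availability checks described above, done without locking the counters.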


Figure 3: The input disruptor coordinates one producer and four consumers.


Output disruptors are similar but they only have two sequential consumers for marshaling and output.[12] Output events are organized into several topics, so that messages can be sent to only the receivers who are interested in them. Each topic has its own disruptor.


The disruptors I've described are used in a style with one producer and multiple consumers, but this isn't a limitation of the design of the disruptor. The disruptor can work with multiple producers too, in this case it still doesn't need locks.[13]


A benefit of the disruptor design is that it makes it easier for consumers to catch up quickly if they run into a problem and fall behind. If the unmarshaler has a problem when processing on slot 15 and returns when the receiver is on slot 31, it can read data from slots 16-30 in one batch to catch up. This batch read of the data from the disruptor makes it easier for lagging consumers to catch up quickly, thus reducing overall latency.


I've described things here, with one each of the journaler, replicator, and unmarshaler - this indeed is what LMAX does. But the design would allow multiple of these components to run. If you ran two journalers then one would take the even slots and the other journaler would take the odd slots. This allows further concurrency of these IO operations should this become necessary.
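That even/odd partitioning can be sketched as a residue-class split (an illustration under my own naming, not LMAX code): shard k of n handles exactly the sequences where seq % n == k, so parallel journalers never contend for the same slot.

```java
// Illustrative sketch of partitioning ring slots between parallel
// journalers: shard k of n owns sequences with seq % n == k, so with
// two shards one takes the even slots and the other the odd slots.
final class JournalerShard {
    private final int shard;
    private final int totalShards;

    JournalerShard(int shard, int totalShards) {
        this.shard = shard;
        this.totalShards = totalShards;
    }

    boolean owns(long sequence) {
        return sequence % totalShards == shard;
    }
}
```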


The ring buffers are large: 20 million slots for the input buffer and 4 million slots for each of the output buffers. The sequence counters are 64-bit long integers that increase monotonically even as the ring slots wrap.[14] The buffer is set to a size that's a power of two so the compiler can do an efficient modulus operation to map from the sequence counter number to the slot number. Like the rest of the system, the disruptors are bounced overnight. This bounce is mainly done to wipe memory so that there is less chance of an expensive garbage collection event during trading. (I also think it's a good habit to regularly restart, so that you rehearse how to do it for emergencies.)
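The power-of-two point rests on a standard identity: when size is a power of two, seq % size equals seq & (size - 1), turning the modulus into a single bitwise AND. A sketch of that mapping (mine, not the Disruptor's actual code):

```java
// Mapping a monotonically increasing 64-bit sequence number to a ring
// slot. With a power-of-two buffer size the modulus reduces to a
// bitwise AND, far cheaper than an integer division.
final class SlotMapper {
    static int slotFor(long sequence, int powerOfTwoSize) {
        return (int) (sequence & (powerOfTwoSize - 1));
    }
}
```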


The journaler's job is to store all the events in a durable form, so that they can be replayed should anything go wrong. LMAX does not use a database for this, just the file system. They stream the events onto the disk. In modern terms, mechanical disks are horribly slow for random access, but very fast for streaming - hence the tag-line "disk is the new tape".[15]


Earlier on I mentioned that LMAX runs multiple copies of its system in a cluster to support rapid failover. The replicator keeps these nodes in sync. All communication in LMAX uses IP multicasting, so clients don't need to know which IP address is the master node. Only the master node listens directly to input events and runs a replicator. The replicator broadcasts the input events to the slave nodes. Should the master node go down, its lack of heartbeat will be noticed, another node becomes master, starts processing input events, and starts its replicator. Each node has its own input disruptor and thus has its own journal and does its own unmarshaling.


Even with IP multicasting, replication is still needed because IP messages can arrive in a different order on different nodes. The master node provides a deterministic sequence for the rest of the processing.


The unmarshaler turns the event data from the wire into a java object that can be used to invoke behavior on the Business Logic Processor. Therefore, unlike the other consumers, it needs to modify the data in the ring buffer so it can store this unmarshaled object. The rule here is that consumers are permitted to write to the ring buffer, but each writable field can only have one parallel consumer that's allowed to write to it. This preserves the principle of only having a single writer. [16]


Figure 4: The LMAX architecture with the disruptors expanded.


The disruptor is a general purpose component that can be used outside of the LMAX system. Usually financial companies are very secretive about their systems, keeping quiet even about items that aren't germane to their business. Not only has LMAX been open about its overall architecture, it has also open-sourced the disruptor code - an act that makes me very happy. Not only will this allow other organizations to make use of the disruptor, it will also allow for more testing of its concurrency properties.


Queues and their lack of mechanical sympathy.


The LMAX architecture caught people's attention because it's a very different way of approaching a high performance system to what most people are thinking about. So far I've talked about how it works, but haven't delved too much into why it was developed this way. This tale is interesting in itself, because this architecture didn't just appear. It took a long time of trying more conventional alternatives, and realizing where they were flawed, before the team settled on this one.


Most business systems these days have a core architecture that relies on multiple active sessions coordinated through a transactional database. The LMAX team were familiar with this approach, and confident that it wouldn't work for LMAX. This assessment was founded in the experiences of Betfair - the parent company who set up LMAX. Betfair is a betting site that allows people to bet on sporting events. It handles very high volumes of traffic with a lot of contention - sports bets tend to burst around particular events. To make this work they have one of the hottest database installations around and have had to do many unnatural acts in order to make it work. Based on this experience they knew how difficult it was to maintain Betfair's performance and were sure that this kind of architecture would not work for the very low latency that a trading site would require. As a result they had to find a different approach.


Their initial approach was to follow what so many are saying these days - that to get high performance you need to use explicit concurrency. For this scenario, this means allowing orders to be processed by multiple threads in parallel. However, as is often the case with concurrency, the difficulty comes because these threads have to communicate with each other. Processing an order changes market conditions and these conditions need to be communicated.


The approach they explored early on was the Actor model and its cousin SEDA. The Actor model relies on independent, active objects with their own thread that communicate with each other via queues. Many people find this kind of concurrency model much easier to deal with than trying to do something based on locking primitives.


The team built a prototype exchange using the actor model and did performance tests on it. What they found was that the processors spent more time managing queues than doing the real logic of the application. Queue access was a bottleneck.


When pushing performance like this, it starts to become important to take account of the way modern hardware is constructed. The phrase Martin Thompson likes to use is "mechanical sympathy". The term comes from race car driving and it reflects the driver having an innate feel for the car, so they are able to feel how to get the best out of it. Many programmers, and I confess I fall into this camp, don't have much mechanical sympathy for how programming interacts with hardware. What's worse is that many programmers think they have mechanical sympathy, but it's built on notions of how hardware used to work that are now many years out of date.


One of the dominant factors with modern CPUs that affects latency is how the CPU interacts with memory. These days going to main memory is a very slow operation in CPU-terms. CPUs have multiple levels of cache, each of which is significantly faster. So to increase speed you want to get your code and data in those caches.


At one level, the actor model helps here. You can think of an actor as its own object that clusters code and data, which is a natural unit for caching. But actors need to communicate, which they do through queues - and the LMAX team observed that it's the queues that interfere with caching.


The explanation runs like this: in order to put some data on a queue, you need to write to that queue. Similarly, to take data off the queue, you need to write to the queue to perform the removal. This is write contention - more than one client may need to write to the same data structure. To deal with the write contention a queue often uses locks. But if a lock is used, that can cause a context switch to the kernel. When this happens the processor involved is likely to lose the data in its caches.


The conclusion they came to was that to get the best caching behavior, you need a design that has only one core writing to any memory location[17]. Multiple readers are fine, processors often use special high-speed links between their caches. But queues fail the one-writer principle.


This analysis led the LMAX team to a couple of conclusions. Firstly it led to the design of the disruptor, which determinedly follows the single-writer constraint. Secondly it led to the idea of exploring the single-threaded business logic approach, asking the question of how fast a single thread can go if it's freed of concurrency management.


The essence of working on a single thread, is to ensure that you have one thread running on one core, the caches warm up, and as much memory access as possible goes to the caches rather than to main memory. This means that both the code and the working set of data needs to be as consistently accessed as possible. Also keeping small objects with code and data together allows them to be swapped between the caches as a unit, simplifying the cache management and again improving performance.


An essential part of the path to the LMAX architecture was the use of performance testing. The consideration and abandonment of an actor-based approach came from building and performance testing a prototype. Similarly much of the steps in improving the performance of the various components were enabled by performance tests. Mechanical sympathy is very valuable - it helps to form hypotheses about what improvements you can make, and guides you to forward steps rather than backward ones - but in the end it's the testing that gives you the convincing evidence.


Performance testing in this style, however, is not a well-understood topic. Regularly the LMAX team stresses that coming up with meaningful performance tests is often harder than developing the production code. Again mechanical sympathy is important to developing the right tests. Testing a low level concurrency component is meaningless unless you take into account the caching behavior of the CPU.


One particular lesson is the importance of writing tests against null components to ensure the performance test is fast enough to really measure what real components are doing. Writing fast test code is no easier than writing fast production code and it's too easy to get false results because the test isn't as fast as the component it's trying to measure.
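A hypothetical sketch of that null-component idea (my own harness, not LMAX's test code): first measure the harness itself against a no-op handler; only if that baseline rate is far above the rate you care about can the same harness meaningfully measure a real handler.

```java
import java.util.function.LongConsumer;

// Illustrative micro-harness. Calibrate against a null (no-op) handler
// first: if the harness can't drive a no-op faster than the component
// under test, its numbers measure the harness, not the component.
final class Harness {
    static long eventsPerSecond(LongConsumer handler, long events) {
        long start = System.nanoTime();
        for (long i = 0; i < events; i++) handler.accept(i);
        long elapsedNanos = Math.max(1, System.nanoTime() - start);
        return events * 1_000_000_000L / elapsedNanos;
    }
}
```

(A real harness would also need warm-up runs for the JIT and multiple samples; this sketch only shows the calibration idea.)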


Should you use this architecture?


At first glance, this architecture appears to be for a very small niche. After all the driver that led to it was to be able to run lots of complex transactions with very low latency - most applications don't need to run at 6 million TPS.


But the thing that fascinates me about this application is that they have ended up with a design which removes much of the programming complexity that plagues many software projects. The traditional model of concurrent sessions surrounding a transactional database isn't free of hassles. There's usually a non-trivial effort that goes into the relationship with the database. Object/relational mapping tools can ease much of the pain of dealing with a database, but they don't deal with it all. Most performance tuning of enterprise applications involves futzing around with SQL.


These days, you can get more main memory into your servers than us old guys could get as disk space. More and more applications are quite capable of putting all their working set in main memory - thus eliminating a source of both complexity and sluggishness. Event Sourcing provides a way to solve the durability problem for an in-memory system; running everything in a single thread solves the concurrency issue. The LMAX experience suggests that as long as you need less than a few million TPS, you'll have enough performance headroom.


There is a considerable overlap here with the growing interest in CQRS. An event sourced, in-memory processor is a natural choice for the command-side of a CQRS system. (Although the LMAX team does not currently use CQRS.)


So what indicates you shouldn't go down this path? This is always a tricky question for little-known techniques like this, since the profession needs more time to explore its boundaries. A starting point, however, is to think of the characteristics that encourage the architecture.


One characteristic is that this is a connected domain where processing one transaction always has the potential to change how following ones are processed. With transactions that are more independent of each other, there's less need to coordinate, so using separate processors running in parallel becomes more attractive.


LMAX concentrates on figuring the consequences of how events change the world. Many sites are more about taking an existing store of information and rendering various combinations of that information to as many eyeballs as they can find - eg think of any media site. Here the architectural challenge often centers on getting your caches right.


Another characteristic of LMAX is that this is a backend system, so it's reasonable to consider how applicable it would be for something acting in an interactive mode. Increasingly web applications are helping us get used to server systems that react to requests, an aspect that does fit in well with this architecture. Where this architecture goes further than most such systems is its absolute use of asynchronous communications, resulting in the changes to the programming model that I outlined earlier.


These changes will take some getting used to for most teams. Most people tend to think of programming in synchronous terms and are not used to dealing with asynchrony. Yet it's long been true that asynchronous communication is an essential tool for responsiveness. It will be interesting to see if the wider use of asynchronous communication in the javascript world, with AJAX and node.js, will encourage more people to investigate this style. The LMAX team found that while it took a bit of time to adjust to asynchronous style, it soon became natural and often easier. In particular error handling was much easier to deal with under this approach.


The LMAX team certainly feels that the days of the coordinating transactional database are numbered. The fact that you can write software more easily using this kind of architecture and that it runs more quickly removes much of the justification for the traditional central database.


For my part, I find this a very exciting story. Much of my goal is to concentrate on software that models complex domains. An architecture like this provides good separation of concerns, allowing people to focus on Domain-Driven Design and keeping much of the platform complexity well separated. The close coupling between domain objects and databases has always been an irritation - approaches like this suggest a way out.


Trading Floor Architecture.



Executive Overview.


Increased competition, higher market data volume, and new regulatory demands are some of the driving forces behind industry changes. Firms are trying to maintain their competitive edge by constantly changing their trading strategies and increasing the speed of trading.


A viable architecture has to include the latest technologies from both network and application domains. It has to be modular to provide a manageable path to evolve each component with minimal disruption to the overall system. Therefore the architecture proposed by this paper is based on a services framework. We examine services such as ultra-low latency messaging, latency monitoring, multicast, computing, storage, data and application virtualization, trading resiliency, trading mobility, and thin client.


The solution to the complex requirements of the next-generation trading platform must be built with a holistic mindset, crossing the boundaries of traditional silos like business and technology or applications and networking.


This document's main goal is to provide guidelines for building an ultra-low latency trading platform while optimizing the raw throughput and message rate for both market data and FIX trading orders.


To achieve this, we are proposing the following latency reduction technologies:


• High speed inter-connect—InfiniBand or 10 Gbps connectivity for the trading cluster.


• High-speed messaging bus.


• Application acceleration via RDMA without application re-code.


• Real-time latency monitoring and re-direction of trading traffic to the path with minimum latency.


Industry Trends and Challenges.


Next-generation trading architectures have to respond to increased demands for speed, volume, and efficiency. For example, the volume of options market data is expected to double after the introduction of options penny trading in 2007. There are also regulatory demands for best execution, which require handling price updates at rates that approach 1M msg/sec. for exchanges. They also require visibility into the freshness of the data and proof that the client got the best possible execution.


In the short term, speed of trading and innovation are key differentiators. An increasing number of trades are handled by algorithmic trading applications placed as close as possible to the trade execution venue. A challenge with these "black-box" trading engines is that they compound the volume increase by issuing orders only to cancel them and re-submit them. The cause of this behavior is lack of visibility into which venue offers best execution. The human trader is now a "financial engineer," a "quant" (quantitative analyst) with programming skills, who can adjust trading models on the fly. Firms develop new financial instruments like weather derivatives or cross-asset class trades and they need to deploy the new applications quickly and in a scalable fashion.


In the long term, competitive differentiation should come from analysis, not just knowledge. The star traders of tomorrow assume risk, achieve true client insight, and consistently beat the market (source IBM: www-935.ibm/services/us/imc/pdf/ge510-6270-trader.pdf).


Business resilience has been one main concern of trading firms since September 11, 2001. Solutions in this area range from redundant data centers situated in different geographies and connected to multiple trading venues to virtual trader solutions offering power traders most of the functionality of a trading floor in a remote location.


The financial services industry is one of the most demanding in terms of IT requirements. The industry is experiencing an architectural shift towards Services-Oriented Architecture (SOA), Web services, and virtualization of IT resources. SOA takes advantage of the increase in network speed to enable dynamic binding and virtualization of software components. This allows the creation of new applications without losing the investment in existing systems and infrastructure. The concept has the potential to revolutionize the way integration is done, enabling significant reductions in the complexity and cost of such integration (gigaspaces/download/MerrilLynchGigaSpacesWP.pdf).


Another trend is the consolidation of servers into data center server farms, while trader desks have only KVM extensions and ultra-thin clients (e.g., SunRay and HP blade solutions). High-speed Metro Area Networks enable market data to be multicast between different locations, enabling the virtualization of the trading floor.


High-Level Architecture.


Figure 1 depicts the high-level architecture of a trading environment. The ticker plant and the algorithmic trading engines are located in the high performance trading cluster in the firm's data center or at the exchange. The human traders are located in the end-user applications area.


Functionally there are two application components in the enterprise trading environment, publishers and subscribers. The messaging bus provides the communication path between publishers and subscribers.


There are two types of traffic specific to a trading environment:


• Market Data—Carries pricing information for financial instruments, news, and other value-added information such as analytics. It is unidirectional and very latency sensitive, typically delivered over UDP multicast. It is measured in updates/sec. and in Mbps. Market data flows from one or multiple external feeds, coming from market data providers like stock exchanges, data aggregators, and ECNs. Each provider has their own market data format. The data is received by feed handlers, specialized applications which normalize and clean the data and then send it to data consumers, such as pricing engines, algorithmic trading applications, or human traders. Sell-side firms also send the market data to their clients, buy-side firms such as mutual funds, hedge funds, and other asset managers. Some buy-side firms may opt to receive direct feeds from exchanges, reducing latency.


Figure 1 Trading Architecture for a Buy Side/Sell Side Firm.


There is no industry standard for market data formats. Each exchange has their proprietary format. Financial content providers such as Reuters and Bloomberg aggregate different sources of market data, normalize it, and add news or analytics. Examples of consolidated feeds are RDF (Reuters Data Feed), RWF (Reuters Wire Format), and Bloomberg Professional Services Data.


To deliver lower latency market data, both vendors have released real-time market data feeds which are less processed and have less analytics:


– Bloomberg B-Pipe—With B-Pipe, Bloomberg de-couples their market data feed from their distribution platform because a Bloomberg terminal is not required to get B-Pipe. Wombat and Reuters Feed Handlers have announced support for B-Pipe.


A firm may decide to receive feeds directly from an exchange to reduce latency. The gains in transmission speed can range from 150 to 500 milliseconds. These feeds are more complex and more expensive, and the firm has to build and maintain its own ticker plant (financetech/featured/showArticle.jhtml?articleID=60404306).


• Trading Orders—This type of traffic carries the actual trades. It is bi-directional and very latency sensitive. It is measured in messages/sec. and Mbps. The orders originate from a buy side or sell side firm and are sent to trading venues like an Exchange or ECN for execution. The most common format for order transport is FIX (Financial Information eXchange—fixprotocol/). The applications which handle FIX messages are called FIX engines and they interface with order management systems (OMS).


An optimization to FIX is called FAST (Fix Adapted for Streaming), which uses a compression schema to reduce message length and, in effect, reduce latency. FAST is targeted more to the delivery of market data and has the potential to become a standard. FAST can also be used as a compression schema for proprietary market data formats.


To reduce latency, firms may opt to establish Direct Market Access (DMA).


DMA is the automated process of routing a securities order directly to an execution venue, therefore avoiding the intervention by a third-party (towergroup/research/content/glossary.jsp?page=1&glossaryId=383). DMA requires a direct connection to the execution venue.


The messaging bus is middleware software from vendors such as Tibco, 29West, Reuters RMDS, or an open source platform such as AMQP. The messaging bus uses a reliable mechanism to deliver messages. The transport can be done over TCP/IP (TibcoEMS, 29West, RMDS, and AMQP) or UDP/multicast (TibcoRV, 29West, and RMDS). One important concept in message distribution is the "topic stream," which is a subset of market data defined by criteria such as ticker symbol, industry, or a certain basket of financial instruments. Subscribers join topic groups mapped to one or multiple sub-topics in order to receive only the relevant information. In the past, all traders received all market data. At the current volumes of traffic, this would be sub-optimal.
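A toy sketch of that topic-stream idea (illustrative only - the class and method names are mine, not any vendor's API): subscribers register interest per topic, and a published update fans out only to the handlers subscribed to its topic, rather than to every consumer.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy topic-stream bus: updates reach only subscribers of the matching
// topic, instead of every trader receiving all market data.
final class TopicBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    void publish(String topic, String update) {
        for (Consumer<String> h : subscribers.getOrDefault(topic, List.of())) {
            h.accept(update);
        }
    }
}
```

Real middleware additionally handles reliable delivery, wildcard topic hierarchies, and transport over TCP or UDP multicast; this sketch shows only the filtering.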


The network plays a critical role in the trading environment. Market data is carried to the trading floor where the human traders are located via a Campus or Metro Area high-speed network. High availability and low latency, as well as high throughput, are the most important metrics.


The high performance trading environment has most of its components in the Data Center server farm. To minimize latency, the algorithmic trading engines need to be located in the proximity of the feed handlers, FIX engines, and order management systems. An alternate deployment model has the algorithmic trading systems located at an exchange or a service provider with fast connectivity to multiple exchanges.


Deployment Models.


There are two deployment models for a high performance trading platform. Firms may choose to have a mix of the two:


• Data Center of the trading firm (Figure 2)—This is the traditional model, where a full-fledged trading platform is developed and maintained by the firm with communication links to all the trading venues. Latency varies with the speed of the links and the number of hops between the firm and the venues.


Figure 2 Traditional Deployment Model.


• Co-location at the trading venue (exchanges, financial service providers (FSP)) (Figure 3)


The trading firm deploys its automated trading platform as close as possible to the execution venues to minimize latency.


Figure 3 Hosted Deployment Model.


Services-Oriented Trading Architecture.


We are proposing a services-oriented framework for building the next-generation trading architecture. This approach provides a conceptual framework and an implementation path based on modularization and minimization of inter-dependencies.


This framework provides firms with a methodology to:


• Evaluate their current state in terms of services.


• Prioritize services based on their value to the business.


• Evolve the trading platform to the desired state using a modular approach.


The high performance trading architecture relies on the following services, as defined by the services architecture framework represented in Figure 4.


Figure 4 Service Architecture Framework for High Performance Trading.


Table 1 Service Descriptions and Technologies.


Ultra-low latency messaging.


Instrumentation—appliances, software agents, and router modules.


OS and I/O virtualization, Remote Direct Memory Access (RDMA), TCP Offload Engines (TOE)


Middleware which parallelizes application processing.


Middleware which speeds up data access for applications, e.g., in-memory caching.


Hardware-assisted multicast replication throughout the network; multicast Layer 2 and Layer 3 optimizations.


Virtualization of storage hardware (VSANs), data replication, remote backup, and file virtualization.


Trading resilience and mobility.


Local and site load balancing and high availability campus networks.


Wide Area application services.


Acceleration of applications over a WAN connection for traders residing off-campus.


Thin client service.


De-coupling of the computing resources from the end-user facing terminals.


Ultra-Low Latency Messaging Service.


This service is provided by the messaging bus, which is a software system that solves the problem of connecting many-to-many applications. The system consists of:


• A set of pre-defined message schemas.


• A set of common command messages.


• A shared application infrastructure for sending the messages to recipients. The shared infrastructure can be based on a message broker or on a publish/subscribe model.


The key requirements for the next-generation messaging bus are (source 29West):


• Lowest possible latency (e.g., less than 100 microseconds)


• Stability under heavy load (e.g., more than 1.4 million msg/sec)


• Control and flexibility (rate control and configurable transports)


There are efforts in the industry to standardize the messaging bus. Advanced Message Queuing Protocol (AMQP) is an example of an open standard championed by J.P. Morgan Chase and supported by a group of vendors such as Cisco, Envoy Technologies, Red Hat, TWIST Process Innovations, Iona, 29West, and iMatix. Two of its main goals are a simpler path to interoperability for applications written on different platforms and modularity, so that the middleware can be easily evolved.


In very general terms, an AMQP server is analogous to an e-mail server, with each exchange acting as a message transfer agent and each message queue as a mailbox. The bindings define the routing tables in each transfer agent. Publishers send messages to individual transfer agents, which then route the messages into mailboxes. Consumers take messages from mailboxes, which creates a powerful and flexible model that is simple (source: amqp/tikiwiki/tiki-index.php?page=OpenApproach#Why_AMQP_).
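The exchange/binding/queue model above can be sketched in a few lines of Python. This is an illustrative toy, not a real AMQP implementation; the class and routing-key names are invented for the example:

```python
class Exchange:
    """Toy model of an AMQP exchange: bindings route published
    messages into queues (the 'mailboxes' of the analogy)."""
    def __init__(self):
        self.bindings = []  # (routing_key, queue) pairs

    def bind(self, routing_key, queue):
        self.bindings.append((routing_key, queue))

    def publish(self, routing_key, message):
        # Acts like a mail transfer agent: deliver a copy of the
        # message into every queue bound to this routing key.
        for key, queue in self.bindings:
            if key == routing_key:
                queue.append(message)

orders, quotes = [], []
ex = Exchange()
ex.bind("orders", orders)
ex.bind("quotes", quotes)
ex.publish("quotes", "CSCO bid 27.04")
# A consumer then drains its queue independently of the publisher.
```

The decoupling is the point: publishers never address consumers directly, only exchanges and routing keys.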


Latency Monitoring Service.


The main requirements for this service are:


• Sub-millisecond granularity of measurements.


• Near-real time visibility without adding latency to the trading traffic.


• Ability to differentiate application processing latency from network transit latency.


• Ability to handle high message rates.


• Provide a programmatic interface for trading applications to receive latency data, thus enabling algorithmic trading engines to adapt to changing conditions.


• Correlate network events with application events for troubleshooting purposes.


Latency can be defined as the time interval between when a trade order is sent and when the same order is acknowledged and acted upon by the receiving party.


Addressing the latency issue is a complex problem, requiring a holistic approach that identifies all sources of latency and applies different technologies at different layers of the system.


Figure 5 depicts the variety of components that can introduce latency at each layer of the OSI stack. It also maps each source of latency with a possible solution and a monitoring solution. This layered approach can give firms a more structured way of attacking the latency issue, whereby each component can be thought of as a service and treated consistently across the firm.


Maintaining an accurate measure of the dynamic state of this time interval across alternative routes and destinations can be of great assistance in tactical trading decisions. The ability to identify the exact location of delays, whether in the customer's edge network, the central processing hub, or the transaction application level, significantly determines the ability of service providers to meet their trading service-level agreements (SLAs). For buy-side and sell-side firms, as well as for market-data syndicators, the quick identification and removal of bottlenecks translates directly into enhanced trade opportunities and revenue.


Figure 5 Latency Management Architecture.


Cisco Low-Latency Monitoring Tools.


Traditional network monitoring tools operate with minutes or seconds granularity. Next-generation trading platforms, especially those supporting algorithmic trading, require latencies less than 5 ms and extremely low levels of packet loss. On a Gigabit LAN, a 100 ms microburst can cause 10,000 transactions to be lost or excessively delayed.


Cisco offers its customers a choice of tools to measure latency in a trading environment:


• Bandwidth Quality Manager (BQM) (OEM from Corvil)


• Cisco AON-based Financial Services Latency Monitoring Solution (FSMS)


Bandwidth Quality Manager.


Bandwidth Quality Manager (BQM) 4.0 is a next-generation network application performance management product that enables customers to monitor and provision their network for controlled levels of latency and loss performance. While BQM is not exclusively targeted at trading networks, its microsecond visibility combined with intelligent bandwidth provisioning features make it ideal for these demanding environments.


Cisco BQM 4.0 implements a broad set of patented and patent-pending traffic measurement and network analysis technologies that give the user unprecedented visibility and understanding of how to optimize the network for maximum application performance.


Cisco BQM is now supported on the product family of Cisco Application Deployment Engine (ADE). The Cisco ADE product family is the platform of choice for Cisco network management applications.


BQM Benefits.


Cisco BQM micro-visibility is the ability to detect, measure, and analyze latency-, jitter-, and loss-inducing traffic events down to microsecond levels of granularity with per-packet resolution. This enables Cisco BQM to detect and determine the impact of traffic events on network latency, jitter, and loss. Critical for trading environments is that BQM can support latency, loss, and jitter measurements one-way for both TCP and UDP (multicast) traffic. This means it reports seamlessly for both trading traffic and market data feeds.


BQM allows the user to specify a comprehensive set of thresholds (against microburst activity, latency, loss, jitter, utilization, etc.) on all interfaces. BQM then operates a background rolling packet capture. Whenever a threshold violation or other potential performance degradation event occurs, it triggers Cisco BQM to store the packet capture to disk for later analysis. This allows the user to examine in full detail both the application traffic that was affected by performance degradation ("the victims") and the traffic that caused the performance degradation ("the culprits"). This can significantly reduce the time spent diagnosing and resolving network performance issues.


BQM is also able to provide detailed bandwidth and quality of service (QoS) policy provisioning recommendations, which the user can directly apply to achieve desired network performance.


BQM Measurements Illustrated.


To understand the difference between some of the more conventional measurement techniques and the visibility provided by BQM, we can look at some comparison graphs. In the first set of graphs (Figure 6 and Figure 7), we see the difference between the latency measured by BQM's Passive Network Quality Monitor (PNQM) and the latency measured by injecting ping packets every 1 second into the traffic stream.


In Figure 6, we see the latency reported by 1-second ICMP ping packets for real network traffic (it is divided by 2 to give an estimate for the one-way delay). It shows the delay comfortably below about 5 ms for almost all of the time.


Figure 6 Latency Reported by 1-Second ICMP Ping Packets for Real Network Traffic.


In Figure 7, we see the latency reported by PNQM for the same traffic at the same time. Here we see that by measuring the one-way latency of the actual application packets, we get a radically different picture. Here the latency is seen to be hovering around 20 ms, with occasional bursts far higher. The explanation is that because ping is sending packets only every second, it is completely missing most of the application traffic latency. In fact, ping results typically only indicate round trip propagation delay rather than realistic application latency across the network.
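The sampling effect described here is easy to reproduce: a probe sent once per second can fall entirely between latency bursts and never observe them. A small illustrative simulation with synthetic numbers (not BQM or PNQM data):

```python
def ping_view(per_ms_latency, interval_ms=1000):
    """Sample a per-millisecond latency series at the probe interval,
    the way a 1-second ICMP ping would see the path."""
    return per_ms_latency[::interval_ms]

# Path latency sits at 2 ms except for a 400 ms burst at 20 ms that
# falls between the probe instants (0 ms, 1000 ms, 2000 ms, ...).
series = [2.0] * 3000
for i in range(1200, 1600):
    series[i] = 20.0

worst_seen_by_ping = max(ping_view(series))  # burst missed entirely
actual_worst = max(series)                   # what the traffic experienced
```

The probe reports a worst case of 2 ms while application packets saw 20 ms, which is exactly the gap between Figure 6 and Figure 7.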


Figure 7 Latency Reported by PNQM for Real Network Traffic.


In the second example (Figure 8), we see the difference in reported link load or saturation levels between a 5-minute average view and a 5 ms microburst view (BQM can report on microbursts down to about 10-100 nanosecond accuracy). The green line shows the average utilization at 5-minute averages to be low, maybe up to 5 Mbits/s. The dark blue plot shows the 5 ms microburst activity reaching between 75 Mbits/s and 100 Mbits/s, effectively the LAN speed. BQM shows this level of granularity for all applications and it also gives clear provisioning rules to enable the user to control or neutralize these microbursts.


Figure 8 Difference in Reported Link Load Between a 5-Minute Average View and a 5 ms Microburst View.


BQM Deployment in the Trading Network.


Figure 9 shows a typical BQM deployment in a trading network.


Figure 9 Typical BQM Deployment in a Trading Network.


BQM can then be used to answer these types of questions:


• Are any of my Gigabit LAN core links saturated for more than X milliseconds? Is this causing loss? Which links would most benefit from an upgrade to Etherchannel or 10 Gigabit speeds?


• What application traffic is causing the saturation of my 1 Gigabit links?


• Is any of the market data experiencing end-to-end loss?


• How much additional latency does the failover data center experience? Is this link sized correctly to deal with microbursts?


• Are my traders getting low latency updates from the market data distribution layer? Are they seeing any delays greater than X milliseconds?


Being able to answer these questions simply and effectively saves time and money in running the trading network.


BQM is an essential tool for gaining visibility in market data and trading environments. It provides granular end-to-end latency measurements in complex infrastructures that experience high-volume data movement. Effectively detecting microbursts in sub-millisecond levels and receiving expert analysis on a particular event is invaluable to trading floor architects. Smart bandwidth provisioning recommendations, such as sizing and what-if analysis, provide greater agility to respond to volatile market conditions. As the explosion of algorithmic trading and increasing message rates continues, BQM, combined with its QoS tool, provides the capability of implementing QoS policies that can protect critical trading applications.


Cisco Financial Services Latency Monitoring Solution.


Cisco and Trading Metrics have collaborated on latency monitoring solutions for FIX order flow and market data monitoring. Cisco AON technology is the foundation for a new class of network-embedded products and solutions that help merge intelligent networks with application infrastructure, based on either service-oriented or traditional architectures. Trading Metrics is a leading provider of analytics software for network infrastructure and application latency monitoring purposes (tradingmetrics/).


The Cisco AON Financial Services Latency Monitoring Solution (FSMS) correlates two kinds of events at the point of observation:


• Network events correlated directly with coincident application message handling.


• Trade order flow and matching market update events.


Using time stamps asserted at the point of capture in the network, real-time analysis of these correlated data streams permits precise identification of bottlenecks across the infrastructure while a trade is being executed or market data is being distributed. By monitoring and measuring latency early in the cycle, financial companies can make better decisions about which network service—and which intermediary, market, or counterparty—to select for routing trade orders. Likewise, this knowledge allows more streamlined access to updated market data (stock quotes, economic news, etc.), which is an important basis for initiating, withdrawing from, or pursuing market opportunities.


The components of the solution are:


• AON hardware in three form factors:


– AON Network Module for Cisco 2600/2800/3700/3800 routers.


– AON Blade for the Cisco Catalyst 6500 series.


– AON 8340 Appliance.


• Trading Metrics M&A 2.0 software, which provides the monitoring and alerting application, displays latency graphs on a dashboard, and issues alerts when slowdowns occur (tradingmetrics/TM_brochure.pdf).


Figure 10 AON-Based FIX Latency Monitoring.


Cisco IP SLA.


Cisco IP SLA is an embedded network management tool in Cisco IOS which allows routers and switches to generate synthetic traffic streams which can be measured for latency, jitter, packet loss, and other criteria (cisco/go/ipsla).


Two key concepts are the source of the generated traffic and the target. The target runs an IP SLA "responder," which timestamps the test traffic when it is received and again when it is returned to the source (for a round-trip measurement). Various traffic types can be sourced within IP SLA, aimed at different metrics and targeting different services and applications. The UDP jitter operation is used to measure one-way and round-trip delay and report variations. Because the traffic is time-stamped on both the sending and target devices using the responder capability, the round-trip delay is characterized as the delta between the timestamps.
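The timestamp arithmetic can be sketched as follows. This is a simplified model of the UDP jitter operation's round-trip calculation, not Cisco's implementation; subtracting the responder's processing time removes it from the measured network delay:

```python
def round_trip_delay_us(t1, t2, t3, t4):
    """All timestamps in microseconds:
    t1 = probe leaves the source, t2 = probe arrives at the responder,
    t3 = reply leaves the responder, t4 = reply arrives back at the source.
    Responder processing time (t3 - t2) is excluded, so only network
    transit time remains."""
    return (t4 - t1) - (t3 - t2)

# 870 us elapsed at the source, of which 50 us was responder processing:
rtt = round_trip_delay_us(0, 410, 460, 870)  # 820 us of network delay
```

This per-device timestamping is what lets IP SLA separate network transit latency from processing latency at the endpoints.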


A new feature was introduced in IOS 12.3(14)T, IP SLA Sub Millisecond Reporting, which allows for timestamps to be displayed with a resolution in microseconds, thus providing a level of granularity not previously available. This new feature has now made IP SLA relevant to campus networks where network latency is typically in the range of 300-800 microseconds and the ability to detect trends and spikes (brief trends) based on microsecond granularity counters is a requirement for customers engaged in time-sensitive electronic trading environments.


As a result, IP SLA is now being considered by significant numbers of financial organizations as they are all faced with requirements to:


• Report baseline latency to their users.


• Trend baseline latency over time.


• Respond quickly to traffic bursts that cause changes in the reported latency.


Sub-millisecond reporting is necessary for these customers, since many campus networks and backbones currently deliver under a millisecond of latency across several switch hops. Electronic trading environments have generally worked to eliminate or minimize all areas of device and network latency to deliver rapid order fulfillment to the business. Reporting that network response times are "just under one millisecond" is no longer sufficient; the granularity of latency measurements reported across a network segment or backbone needs to be closer to 300-800 microseconds, with a degree of resolution of 100 microseconds.


IP SLA recently added support for IP multicast test streams, which can measure market data latency.


A typical network topology is shown in Figure 11 with the IP SLA shadow routers, sources, and responders.


Figure 11 IP SLA Deployment.


Computing Services.


Computing services cover a wide range of technologies with the goal of eliminating memory and CPU bottlenecks created by the processing of network packets. Trading applications consume high volumes of market data and the servers have to dedicate resources to processing network traffic instead of application processing.


• Transport processing—At high speeds, network packet processing can consume a significant amount of server CPU cycles and memory. An established rule of thumb states that 1Gbps of network bandwidth requires 1 GHz of processor capacity (source Intel white paper on I/O acceleration intel/technology/ioacceleration/306517.pdf).


• Intermediate buffer copying—In a conventional network stack implementation, data needs to be copied by the CPU between network buffers and application buffers. This overhead is worsened by the fact that memory speeds have not kept up with increases in CPU speeds. For example, processors like the Intel Xeon are approaching 4 GHz, while RAM chips hover around 400 MHz (for DDR 3200 memory) (source: Intel, intel/technology/ioacceleration/306517.pdf).


• Context switching—Every time an individual packet needs to be processed, the CPU performs a context switch from application context to network traffic context. This overhead could be reduced if the switch occurred only when the whole application buffer is complete.


Figure 12 Sources of Overhead in Data Center Servers.


• TCP Offload Engine (TOE)—Offloads transport processor cycles to the NIC. Moves TCP/IP protocol stack buffer copies from system memory to NIC memory.


• Remote Direct Memory Access (RDMA)—Enables a network adapter to transfer data directly from application to application without involving the operating system. Eliminates intermediate and application buffer copies (memory bandwidth consumption).


• Kernel bypass — Direct user-level access to hardware. Dramatically reduces application context switches.


Figure 13 RDMA and Kernel Bypass.


InfiniBand is a point-to-point (switched fabric) bidirectional serial communication link which implements RDMA, among other features. Cisco offers an InfiniBand switch, the Server Fabric Switch (SFS): cisco/application/pdf/en/us/guest/netsol/ns500/c643/cdccont_0900aecd804c35cb.pdf.


Figure 14 Typical SFS Deployment.


Trading applications benefit from the reduction in latency and latency variability, as proved by a test performed with the Cisco SFS and Wombat Feed Handlers by STAC Research.


Application Virtualization Service.


De-coupling applications from the underlying OS and server hardware enables them to run as network services. One application can run in parallel on multiple servers, or multiple applications can run on the same server, as the best resource allocation dictates. This decoupling enables better load balancing and disaster recovery for business-continuance strategies. The process of re-allocating computing resources to an application is dynamic. Using an application virtualization system like Data Synapse's GridServer, applications can migrate, using pre-configured policies, to under-utilized servers in a supply-matches-demand process (networkworld/supp/2005/ndc1/022105virtual.html?page=2).


There are many business advantages for financial firms who adopt application virtualization:


• Faster time to market for new products and services.


• Faster integration of firms following merger and acquisition activity.


• Increased application availability.


• Better workload distribution, which creates more "head room" for processing spikes in trading volume.


• Operational efficiency and control.


• Reduction in IT complexity.


Currently, application virtualization is not used in the trading front-office. One use-case is risk modeling, such as Monte Carlo simulations. As the technology evolves, it is conceivable that some of the trading platforms will adopt it.


Data Virtualization Service.


To effectively share resources across distributed enterprise applications, firms must be able to leverage data across multiple sources in real-time while ensuring data integrity. With solutions from data virtualization software vendors such as Gemstone or Tangosol (now Oracle), financial firms can access heterogeneous sources of data as a single system image that enables connectivity between business processes and unrestrained application access to distributed caching. The net result is that all users have instant access to these data resources across a distributed network (gridtoday/03/0210/101061.html).


This is called a data grid and is the first step in the process of creating what Gartner calls Extreme Transaction Processing (XTP) (gartner/DisplayDocument?ref=g_search&id=500947). Technologies such as data and applications virtualization enable financial firms to perform real-time complex analytics, event-driven applications, and dynamic resource allocation.


One example of data virtualization in action is a global order book application. An order book is the repository of active orders that is published by the exchange or other market makers. A global order book aggregates orders from around the world from markets that operate independently. The biggest challenge for the application is scalability over WAN connectivity because it has to maintain state. Today's data grids are localized in data centers connected by Metro Area Networks (MAN). This is mainly because the applications themselves have limits—they have been developed without the WAN in mind.


Figure 15 GemStone GemFire Distributed Caching.


Before data virtualization, applications used database clustering for failover and scalability. This solution is limited by the performance of the underlying database. Failover is slower because the data is committed to disk. With data grids, the data which is part of the active state is cached in memory, which drastically reduces the failover time. Scaling the data grid means just adding more distributed resources, providing more deterministic performance compared to a database cluster.


Multicast Service.


Market data delivery is a perfect example of an application that needs to deliver the same data stream to hundreds and potentially thousands of end users. Market data services have been implemented with TCP or UDP broadcast as the network layer, but those implementations have limited scalability. Using TCP requires a separate socket and sliding window on the server for each recipient. UDP broadcast requires a separate copy of the stream for each destination subnet. Both of these methods exhaust the resources of the servers and the network. The server side must transmit and service each of the streams individually, which requires larger and larger server farms. On the network side, the required bandwidth for the application increases in a linear fashion. For example, to send a 1 Mbps stream to 1000 recipients using TCP requires 1 Gbps of bandwidth.


IP multicast is the only way to scale market data delivery. To deliver a 1 Mbps stream to 1000 recipients, IP multicast would require 1 Mbps. The stream can be delivered by as few as two servers—one primary and one backup for redundancy.
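The back-of-the-envelope arithmetic behind this comparison can be written down directly:

```python
def unicast_bandwidth_bps(stream_bps, recipients):
    # TCP/unicast: the source must transmit one copy per recipient,
    # so required bandwidth grows linearly with the audience.
    return stream_bps * recipients

def multicast_bandwidth_bps(stream_bps, recipients):
    # IP multicast: the network replicates the stream, so the source
    # sends a single copy regardless of the number of receivers.
    return stream_bps

MBPS = 1_000_000
tcp_load = unicast_bandwidth_bps(1 * MBPS, 1000)   # 1 Gbps at the source
mc_load = multicast_bandwidth_bps(1 * MBPS, 1000)  # still 1 Mbps
```

At 1000 recipients the unicast source is already saturating a Gigabit link, while the multicast source load is unchanged.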


There are two main phases of market data delivery to the end user. In the first phase, the data stream must be brought from the exchange into the brokerage's network. Typically the feeds are terminated in a data center on the customer premise. The feeds are then processed by a feed handler, which may normalize the data stream into a common format and then republish into the application messaging servers in the data center.


The second phase involves injecting the data stream into the application messaging bus which feeds the core infrastructure of the trading applications. The large brokerage houses have thousands of applications that use the market data streams for various purposes, such as live trades, long term trending, arbitrage, etc. Many of these applications listen to the feeds and then republish their own analytical and derivative information. For example, a brokerage may compare the prices of CSCO to the option prices of CSCO on another exchange and then publish ratings which a different application may monitor to determine how much they are out of synchronization.


Figure 16 Market Data Distribution Players.


The delivery of these data streams is typically over a reliable multicast transport protocol, traditionally Tibco Rendezvous. Tibco RV operates in a publish and subscribe environment. Each financial instrument is given a subject name, such as CSCO.last. Each application server can request the individual instruments of interest by their subject name and receive just that subset of the information. This is called subject-based forwarding or filtering. Subject-based filtering is patented by Tibco.
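Subject-based filtering can be illustrated with a toy subscriber-side filter. This is a simplification: real Tibco RV subjects are hierarchical and support wildcards, which this sketch omits, and the subject names are examples only:

```python
def subject_filter(subscriptions, feed):
    """Deliver only the (subject, value) updates whose subject name
    matches one of the subscriber's requested subjects."""
    return [(subj, val) for subj, val in feed if subj in subscriptions]

feed = [("CSCO.last", 27.05), ("INTL.last", 21.10), ("CSCO.bid", 27.04)]
mine = subject_filter({"CSCO.last", "CSCO.bid"}, feed)
# Only the CSCO updates are delivered; the INTL update is filtered out.
```

The same idea scales up: each application server declares interest by subject and receives only that slice of the full market data stream.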


A distinction should be made between the first and second phases of market data delivery. The delivery of market data from the exchange to the brokerage is mostly a one-to-many application. The only exception to the unidirectional nature of market data may be retransmission requests, which are usually sent using unicast. The trading applications, however, are definitely many-to-many applications and may interact with the exchanges to place orders.


Figure 17 Market Data Architecture.


Design Issues.


Number of Groups/Channels to Use.


Many application developers consider using thousands of multicast groups to give them the ability to divide products or instruments into small buckets. Normally these applications send many small messages as part of their information bus, and several messages are usually packed into each packet, which is received by many users. Sending fewer messages in each packet increases the overhead necessary for each message.


In the extreme case, sending only one message in each packet quickly reaches the point of diminishing returns—there is more overhead sent than actual data. Application developers must find a reasonable compromise between the number of groups and breaking up their products into logical buckets.
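The diminishing-returns point can be quantified with a simple efficiency calculation. The 40-byte message size and 54 bytes of per-packet framing overhead are illustrative assumptions, not figures from any particular feed:

```python
def payload_efficiency(msgs_per_packet, msg_bytes=40, overhead_bytes=54):
    """Fraction of each packet carrying application data, assuming
    hypothetical 40-byte messages and 54 bytes of per-packet
    Ethernet/IP/UDP framing overhead."""
    data = msgs_per_packet * msg_bytes
    return data / (data + overhead_bytes)

one_per_packet = payload_efficiency(1)   # below 0.5: more overhead than data
ten_per_packet = payload_efficiency(10)  # most of the packet is data
```

With one message per packet, framing outweighs the data itself; batching ten messages pushes the data fraction close to 90%, which is why splitting instruments into too many small groups wastes bandwidth.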


Consider, for example, the Nasdaq Quotation Dissemination Service (NQDS). The instruments are broken up alphabetically:


Another example is the Nasdaq Totalview service, broken up this way:


This approach allows for straightforward network/application management, but does not necessarily allow for optimized bandwidth utilization for most users. A user of NQDS that is interested in technology stocks and would like to subscribe to just CSCO and INTL would have to pull down all the data for the first two groups of NQDS. Understanding the way users pull down the data and then organizing it into appropriate logical groups optimizes the bandwidth for each user.


In many market data applications, optimizing the data organization would be of limited value. Typically customers bring all data into a few machines and filter the instruments there. Using more groups is just more overhead for the stack and does not help the customers conserve bandwidth. Another approach might be to keep the groups down to a minimum level and use UDP port numbers to further differentiate if necessary. The other extreme would be to use just one multicast group for the entire application and then have the end user filter the data. In some situations this may be sufficient.


Intermittent Sources.


A common issue with market data applications is servers that send data to a multicast group and then go silent for more than 3.5 minutes. These intermittent sources may cause thrashing of state on the network and can introduce packet loss during the window of time when soft state and then hardware shortcuts are being created.


PIM-Bidir or PIM-SSM.


The first and best solution for intermittent sources is to use PIM-Bidir for many-to-many applications and PIM-SSM for one-to-many applications.


Neither of these optimizations of the PIM protocol relies on data-driven events to create forwarding state. That means that as long as the receivers are subscribed to the streams, the network has the forwarding state created in the hardware switching path.


Intermittent sources are not an issue with PIM-Bidir and PIM-SSM.


Null Packets.


In PIM-SM environments a common method to make sure forwarding state is created is to send a burst of null packets to the multicast group before the actual data stream. The application must efficiently ignore these null data packets to ensure it does not affect performance. The sources must only send the burst of packets if they have been silent for more than 3 minutes. A good practice is to send the burst if the source is silent for more than a minute. Many financials send out an initial burst of traffic in the morning and then all well-behaved sources do not have problems.
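The source-side rule described above (re-prime forwarding state after a minute of silence, well inside the roughly three-minute expiry) might be sketched as follows; the function and constant names are invented for illustration:

```python
SILENCE_LIMIT_S = 60  # good practice: re-prime well before state expires

def needs_null_burst(last_send_s, now_s, limit_s=SILENCE_LIMIT_S):
    """True when the source has been silent long enough that the
    PIM-SM forwarding state may time out, so a burst of null packets
    should be sent to the group before the real data stream resumes."""
    return (now_s - last_send_s) > limit_s
```

A well-behaved source checks this before publishing; receivers, in turn, must recognize and discard the null packets cheaply so the priming burst does not affect application performance.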


Periodic Keepalives or Heartbeats.


An alternative approach for PIM-SM environments is for sources to send periodic heartbeat messages to the multicast groups. This is a similar approach to the null packets, but the packets can be sent on a regular timer so that the forwarding state never expires.


S, G Expiry Timer.


Finally, Cisco has made a modification to the operation of the (S,G) expiry timer in IOS. There is now a CLI knob to allow the state for an (S,G) to stay alive for hours without any traffic being sent. The (S,G) expiry timer is configurable. This approach should be considered a workaround until PIM-Bidir or PIM-SSM is deployed or the application is fixed.
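In recent IOS releases this knob is the global `ip pim sparse sg-expiry-timer` command; verify the exact syntax, value range, and availability for your IOS release before relying on it. A sketch:

```
! Keep (S,G) entries alive even when the source goes silent
! (example value: one hour; range and default vary by release)
ip pim sparse sg-expiry-timer 3600
```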


RTCP Feedback.


A common issue with real time voice and video applications that use RTP is the use of RTCP feedback traffic. Unnecessary use of the feedback option can create excessive multicast state in the network. If the RTCP traffic is not required by the application it should be avoided.


Fast Producers and Slow Consumers.


Today many servers providing market data are attached at Gigabit speeds, while the receivers are attached at different speeds, usually 100 Mbps. This creates the potential for receivers to drop packets and request retransmissions, which creates more traffic that the slowest consumers cannot handle, continuing the vicious circle.


The solution needs to be some type of access control in the application that limits the amount of data that one host can request. QoS and other network functions can mitigate the problem, but ultimately the subscriptions need to be managed in the application.
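An application-level admission check of the kind described might look like this; the class and method names are hypothetical, invented for the example:

```python
class SubscriptionLimiter:
    """Cap the aggregate subscribed stream bandwidth per host so a
    slow consumer (e.g., on a 100 Mbps link) cannot request more
    data than it can drain."""
    def __init__(self, max_bps):
        self.max_bps = max_bps
        self.used_bps = {}  # host -> bandwidth already granted

    def subscribe(self, host, stream_bps):
        used = self.used_bps.get(host, 0)
        if used + stream_bps > self.max_bps:
            return False  # reject: would oversubscribe the receiver's link
        self.used_bps[host] = used + stream_bps
        return True

limiter = SubscriptionLimiter(max_bps=100_000_000)  # 100 Mbps receiver
```

Rejecting the over-subscription at the application layer stops the drop-and-retransmit spiral before QoS has to absorb it.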


Tibco Heartbeats.


TibcoRV has had the ability to use IP multicast for the heartbeat between the TICs for many years. However, some brokerage houses are still using very old versions of TibcoRV that rely on UDP broadcast for resiliency. This limitation is often cited as a reason to maintain a Layer 2 infrastructure between TICs located in different data centers. These older versions of TibcoRV should be phased out in favor of the versions that support IP multicast.


Multicast Forwarding Options.


PIM Sparse Mode.


The standard IP multicast forwarding protocol used today for market data delivery is PIM Sparse Mode. It is supported on all Cisco routers and switches and is well understood. PIM-SM can be used in all the network components from the exchange, FSP, and brokerage.


There are, however, some long-standing issues and unnecessary complexity associated with a PIM-SM deployment that could be avoided by using PIM-Bidir and PIM-SSM. These are covered in the next sections.


The main components of the PIM-SM implementation are:


• PIM Sparse Mode v2.


• Shared Tree (spt-threshold infinity)


• Anycast RP (a design option in the brokerage or in the exchange).


Details of Anycast RP can be found in:


The classic high availability design for Tibco in the brokerage network is documented in:


Bidirectional PIM.


PIM-Bidir is an optimization of PIM Sparse Mode for many-to-many applications. It has several key advantages over a PIM-SM deployment:


• Better support for intermittent sources.


• No data-triggered events.


One of the weaknesses of PIM-SM is that the network continually needs to react to active data flows. This can cause non-deterministic behavior that may be hard to troubleshoot. PIM-Bidir has the following major protocol differences over PIM-SM:


– No source registration.


Source traffic is automatically sent to the RP and then down to the interested receivers. There is no unicast register encapsulation, no PIM joins from the RP toward the first-hop router, and no register-stop messages.


– No SPT switchover.


All PIM-Bidir traffic is forwarded on a *,G forwarding entry. The router does not have to monitor the traffic flow on a *,G and then send joins when the traffic passes a threshold.


– No need for an actual RP.


The RP does not have an actual protocol function in PIM-Bidir. The RP acts as a routing vector in which all the traffic converges. The RP can be configured as an address that is not assigned to any particular device. This is called a Phantom RP.


– No need for MSDP.


MSDP provides source information between RPs in a PIM-SM network. PIM-Bidir does not use the active source information for any forwarding decisions and therefore MSDP is not required.


Bidirectional PIM is ideally suited for the brokerage network in the data center of the exchange. In this environment there are many sources sending to a relatively small set of groups in a many-to-many traffic pattern.


The key components of the PIM-Bidir implementation are:


Further details about Phantom RP and basic PIM-Bidir design are documented in:


Source Specific Multicast.


PIM-SSM is an optimization of PIM Sparse Mode for one-to-many applications. In certain environments it can offer several distinct advantages over PIM-SM. Like PIM-Bidir, PIM-SSM does not rely on any data-triggered events. Furthermore, PIM-SSM does not require an RP at all—there is no such concept in PIM-SSM. The forwarding information in the network is completely controlled by the interest of the receivers.
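The receiver-driven nature of PIM-SSM can be illustrated with a toy forwarding table: (S,G) state exists only because downstream receivers explicitly joined it, and is pruned when the last receiver leaves. The addresses and interface names are hypothetical, and this is a conceptual model, not a protocol implementation.

```python
class SsmForwardingTable:
    """Toy model: (S,G) state is created and removed purely by receiver joins/leaves."""

    def __init__(self):
        self.state = {}  # (source, group) -> set of outgoing interfaces

    def join(self, source, group, interface):
        self.state.setdefault((source, group), set()).add(interface)

    def leave(self, source, group, interface):
        oifs = self.state.get((source, group))
        if oifs:
            oifs.discard(interface)
            if not oifs:                      # last receiver gone: state is pruned
                del self.state[(source, group)]

    def forward(self, source, group):
        # Traffic from an un-joined source is simply not forwarded: no RP,
        # no register process, no data-triggered state creation.
        return sorted(self.state.get((source, group), ()))

fib = SsmForwardingTable()
fib.join("10.1.1.1", "232.1.1.1", "Gi0/1")
```

Contrast this with PIM-SM, where data arrival itself (registers, SPT switchover) creates and tears down state.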


Source Specific Multicast is ideally suited for market data delivery in the financial service provider. The FSP can receive the feeds from the exchanges and then route them to the edge of its network.


Many FSPs are also implementing MPLS and Multicast VPNs in their core. PIM-SSM is the preferred method for transporting traffic in VRFs.


When PIM-SSM is deployed all the way to the end user, the receiver indicates its interest in a particular S,G with IGMPv3. Even though IGMPv3 was defined by RFC 3376 back in October 2002, it still has not been implemented by all edge devices. This creates a challenge for deploying an end-to-end PIM-SSM service. A transitional solution has been developed by Cisco to enable an edge device that supports only IGMPv2 to participate in a PIM-SSM service. This feature is called SSM Mapping and is documented in:
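Conceptually, SSM Mapping is a static group-to-source table applied at the edge: an IGMPv2 (*,G) membership report is translated into the (S,G) joins that PIM-SSM requires. The sketch below is illustrative; the addresses are invented and real deployments configure the mapping on the router (statically or via DNS), not in application code.

```python
# Static SSM mapping: group -> list of sources, configured by the operator.
SSM_MAP = {
    "232.1.1.1": ["10.1.1.1"],
    "232.1.1.2": ["10.1.1.1", "10.1.1.2"],  # two redundant feed sources
}

def translate_igmpv2_report(group):
    """Turn an IGMPv2 (*,G) report into the (S,G) joins PIM-SSM needs."""
    sources = SSM_MAP.get(group)
    if sources is None:
        return []          # no mapping configured: the (*,G) report cannot be honored
    return [(source, group) for source in sources]
```

The edge router then issues PIM (S,G) joins for each returned pair, so legacy IGMPv2 hosts can still receive an SSM-delivered feed.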


Storage Services.


The service provides storage capabilities to the market data and trading environments. Trading applications access backend storage to connect to different databases and other repositories consisting of portfolios, trade settlements, compliance data, management applications, Enterprise Service Bus (ESB), and other critical applications where reliability and security are critical to the success of the business. The main requirements for the service are:


Storage virtualization is an enabling technology that simplifies management of complex infrastructures, enables non-disruptive operations, and facilitates critical elements of a proactive information lifecycle management (ILM) strategy. EMC Invista running on the Cisco MDS 9000 enables heterogeneous storage pooling and dynamic storage provisioning, allowing allocation of any storage to any application. High availability is increased with seamless data migration. Appropriate class of storage is allocated to point-in-time copies (clones). Storage virtualization is also leveraged through the use of Virtual Storage Area Networks (VSANs), which enable the consolidation of multiple isolated SANs onto a single physical SAN infrastructure, while still partitioning them as completely separate logical entities. VSANs provide all the security and fabric services of traditional SANs, yet give organizations the flexibility to easily move resources from one VSAN to another. This results in increased disk and network utilization while driving down the cost of management. Integrated Inter VSAN Routing (IVR) enables sharing of common resources across VSANs.


Figure 18 High Performance Computing Storage.


Replication of data to a secondary and tertiary data center is crucial for business continuance. Replication offsite over Fibre Channel over IP (FCIP) coupled with write acceleration and tape acceleration provides improved performance over long distance. Continuous Data Protection (CDP) is another mechanism which is gaining popularity in the industry. It refers to backup of computer data by automatically saving a copy of every change made to that data, essentially capturing every version of the data that the user saves. It allows the user or administrator to restore data to any point in time. Solutions from EMC and Incipient utilize the SANTap protocol on the Storage Services Module (SSM) in the MDS platform to provide CDP functionality. The SSM uses the SANTap service to intercept and redirect a copy of a write between a given initiator and target. The appliance does not reside in the data path; it is completely passive. The CDP solutions typically leverage a history journal that tracks all changes and bookmarks that identify application-specific events. This ensures that data at any point in time is fully self-consistent and is recoverable instantly in the event of a site failure.
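The history-journal idea behind CDP can be sketched as follows: every intercepted write is appended with a timestamp, and a restore replays the journal up to the requested point in time. The block-level addressing and journal format here are illustrative assumptions, not the SANTap wire format.

```python
class CdpJournal:
    """Append-only journal of writes; restore the volume to any point in time."""

    def __init__(self):
        self.entries = []          # (timestamp, block, data), timestamps ascending

    def record_write(self, timestamp, block, data):
        """Called for the redirected copy of every write between initiator and target."""
        self.entries.append((timestamp, block, data))

    def restore(self, point_in_time):
        """Rebuild the block map as of point_in_time by replaying the journal."""
        volume = {}
        for ts, block, data in self.entries:
            if ts > point_in_time:
                break              # ignore writes after the requested point
            volume[block] = data
        return volume

j = CdpJournal()
j.record_write(1.0, 0, b"v1")
j.record_write(2.0, 0, b"v2")
```

Bookmarks in a real CDP product are just named timestamps in such a journal, which is why restores to application-consistent points are instant.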


Backup procedure reliability and performance are extremely important when storing critical financial data to a SAN. The use of expensive media servers to move data from disk to tape devices can be cumbersome. Network-accelerated serverless backup (NASB) helps you back up increased amounts of data in shorter backup time frames by shifting the data movement from multiple backup servers to Cisco MDS 9000 Series multilayer switches. This technology decreases impact on application servers because the MDS offloads the application and backup servers. It also reduces the number of backup and media servers required, thus reducing CAPEX and OPEX. The flexibility of the backup environment increases because storage and tape drives can reside anywhere on the SAN.


Trading Resilience and Mobility.


The main requirements for this service are to provide the virtual trader with:


• Fully scalable and redundant campus trading environment.


• Resilient server load balancing and high availability in analytic server farms.


• Global site load balancing that provides the capability to continue participating in the market venues of closest proximity.


A highly available campus environment is capable of sustaining multiple failures (e.g., links, switches, or modules), which provides non-disruptive access to trading systems for traders and market data feeds. Fine-tuned routing protocol timers, in conjunction with mechanisms such as NSF/SSO, provide subsecond recovery from any failure.


The high-speed interconnect between data centers can be DWDM/dark fiber, which provides business continuance in case of a site failure. The sites are 100-200 km apart, allowing synchronous data replication. Usually the limit for synchronous data replication is 100 km, but with Read/Write Acceleration it can stretch to 200 km. A tertiary data center can be greater than 200 km away, replicating data in an asynchronous fashion.
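The 100 km guideline follows directly from propagation delay: light in fiber travels at roughly 200,000 km/s (about 5 µs per km), so every synchronously acknowledged write incurs a round trip of about 1 ms at 100 km. A quick sizing helper:

```python
def replication_rtt_ms(distance_km, km_per_ms=200.0):
    """Round-trip propagation delay over fiber; ~200 km per millisecond one way."""
    one_way_ms = distance_km / km_per_ms
    return 2 * one_way_ms

# Synchronous replication adds this delay to every acknowledged write:
for km in (100, 200, 400):
    print(f"{km} km -> ~{replication_rtt_ms(km):.1f} ms RTT")
```

At 100 km the ~1 ms RTT per write is tolerable for most trading back ends; at several hundred kilometers the penalty compounds, which is why the tertiary site replicates asynchronously.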


Figure 19 Trading Resilience.


A robust server load balancing solution is required for order routing, algorithmic trading, risk analysis, and other services to offer continuous access to clients regardless of a server failure. Multiple servers make up a "farm," and these hosts can be added or removed without disruption since they reside behind a virtual IP (VIP) address which is announced in the network.
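The non-disruptive add/remove behavior behind a VIP can be sketched as a dispatcher over a mutable server pool: clients only ever see the VIP, while the membership of the farm changes underneath. Server names and the round-robin policy are illustrative assumptions.

```python
class VipPool:
    """Round-robin dispatch to a server farm fronted by one virtual IP."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._i = 0

    def add(self, server):
        """New capacity; no client-visible change, the VIP stays the same."""
        self.servers.append(server)

    def remove(self, server):
        """Drain a failed or retired host out of rotation."""
        self.servers.remove(server)

    def pick(self):
        """Choose the next server for an incoming connection to the VIP."""
        server = self.servers[self._i % len(self.servers)]
        self._i += 1
        return server

vip = VipPool(["srv-a", "srv-b"])
```

A hardware load balancer additionally health-checks members and preserves existing sessions, but the pool-behind-a-VIP indirection is the core idea.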


A global site load balancing solution provides remote traders the resiliency to access trading environments which are closer to their location. This minimizes latency for execution times since requests are always routed to the nearest venue.


Figure 20 Virtualization of Trading Environment.


A trading environment can be virtualized to provide segmentation and resiliency in complex architectures. Figure 20 illustrates a high-level topology depicting multiple market data feeds entering the environment, whereby each vendor is assigned its own Virtual Routing and Forwarding (VRF) instance. The market data is transferred to a high-speed InfiniBand low-latency compute fabric where feed handlers, order routing systems, and algorithmic trading systems reside. All storage is accessed via a SAN and is also virtualized with VSANs, allowing further security and segmentation. The normalized data from the compute fabric is transferred to the campus trading environment where the trading desks reside.


Wide Area Application Services.


This service provides application acceleration and optimization capabilities for traders who are located outside of the core trading floor facility/data center and working from a remote office. To consolidate servers in remote offices, file servers, NAS filers, storage arrays, and tape drives are moved to a corporate data center to increase security and regulatory compliance and to centralize storage and archival management. As the traditional trading floor becomes more virtual, wide area application services technology is being used to provide a "LAN-like" experience to remote traders when they access resources at the corporate site. Traders often utilize Microsoft Office applications, especially Excel, in addition to SharePoint and Exchange. Excel is used heavily for modeling and permutations where sometimes only small portions of the file are changed. The CIFS protocol is notoriously "chatty": several messages normally traverse the WAN for a simple file operation, an inefficiency addressed by Wide Area Application Services (WAAS) technology. Bloomberg and Reuters applications are also very popular financial tools which access a centralized SAN or NAS filer to retrieve critical data that is fused together before being presented on a trader's screen.


Figure 21 Wide Area Optimization.


A pair of Wide Area Application Engines (WAEs) that reside in the remote office and the data center provide local object caching to increase application performance. The remote office WAEs can be a module in the ISR router or a stand-alone appliance. The data center WAE devices are load balanced behind an Application Control Engine module installed in a pair of Catalyst 6500 series switches at the aggregation layer. The WAE appliance farm is represented by a virtual IP address. The local router in each site utilizes Web Cache Communication Protocol version 2 (WCCP v2) to redirect traffic to the WAE that intercepts the traffic and determines if there is a cache hit or miss. The content is served locally from the engine if it resides in cache; otherwise the request is sent across the WAN the initial time to retrieve the object. This methodology optimizes the trader experience by removing application latency and shielding the individual from any congestion in the WAN.
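The hit/miss decision the WAE makes can be modeled as an object cache in front of a WAN fetch. `fetch_over_wan` is a hypothetical stand-in for the origin retrieval; real WAAS also validates freshness and works below the object level, so this is only a conceptual sketch.

```python
class EdgeCache:
    """Serve locally on a hit; cross the WAN once on a miss, then cache the object."""

    def __init__(self, fetch_over_wan):
        self.fetch_over_wan = fetch_over_wan   # callable: key -> object (the slow path)
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.store:
            self.hits += 1                     # served from the local engine: no WAN latency
            return self.store[key]
        self.misses += 1                       # only the initial request crosses the WAN
        obj = self.store[key] = self.fetch_over_wan(key)
        return obj
```

Every request after the first is answered at LAN latency, which is exactly the effect the WCCP redirection to the WAE is designed to produce.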


WAAS uses the following technologies to provide application acceleration:


• Data Redundancy Elimination (DRE) is an advanced form of network compression which allows the WAE to maintain a history of previously-seen TCP message traffic for the purposes of reducing redundancy found in network traffic. This combined with the Lempel-Ziv (LZ) compression algorithm reduces the number of redundant packets that traverse the WAN, which improves application transaction performance and conserves bandwidth.
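DRE's core idea, replacing previously seen data with short references into a shared history, can be sketched with fixed-size chunks and truncated hashes. Real DRE uses content-defined chunking and a dictionary synchronized between peer WAEs, so treat this as a minimal conceptual model.

```python
import hashlib

CHUNK = 64  # bytes; real DRE uses variable, content-defined chunk boundaries

def dre_encode(data, history):
    """Replace chunks already in the shared history with short hash references."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha1(chunk).digest()[:8]
        if digest in history:
            out.append(("ref", digest))        # 8 bytes on the wire instead of 64
        else:
            history[digest] = chunk
            out.append(("raw", chunk))
    return out

def dre_decode(tokens, history):
    """Peer side: expand references and learn raw chunks into its own history."""
    out = []
    for kind, payload in tokens:
        if kind == "ref":
            out.append(history[payload])
        else:
            history[hashlib.sha1(payload).digest()[:8]] = payload
            out.append(payload)
    return b"".join(out)
```

Resending an Excel file where only a few cells changed then ships mostly 8-byte references, which is why DRE combined with LZ compression saves so much WAN bandwidth.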


• Transport Flow Optimization (TFO) employs a robust TCP proxy to safely optimize TCP at the WAE device by applying TCP-compliant optimizations to shield the clients and servers from poor TCP behavior because of WAN conditions. By running a TCP proxy between the devices and leveraging an optimized TCP stack between the devices, many of the problems that occur in the WAN are completely blocked from propagating back to trader desktops. The traders experience LAN-like TCP response times and behavior because the WAE is terminating TCP locally. TFO improves reliability and throughput through increases in TCP window scaling and sizing enhancements in addition to superior congestion management.


Thin Client Service.


This service provides a "thin" advanced trading desktop which delivers significant advantages to demanding trading floor environments requiring continuous growth in compute power. As financial institutions race to provide the best trade executions for their clients, traders are utilizing several simultaneous critical applications that facilitate complex transactions. It is not uncommon to find three or more workstations and monitors at a trader's desk which provide visibility into market liquidity, trading venues, news, analysis of complex portfolio simulations, and other financial tools. In addition, market dynamics continue to evolve with Direct Market Access (DMA), ECNs, alternative trading venues, and upcoming regulation changes with Regulation National Market System (RegNMS) in the US and the Markets in Financial Instruments Directive (MiFID) in Europe. At the same time, the business seeks greater control, improved ROI, and additional flexibility, which creates greater demands on trading floor infrastructures.


Traders no longer require multiple workstations at their desk. Thin clients consist of a keyboard, mouse, and multiple displays, which provide a total trader desktop solution without compromising security. Hewlett Packard, Citrix, Desktone, Wyse, and other vendors provide thin client solutions to capitalize on the virtual desktop paradigm. Thin clients de-couple the user-facing hardware from the processing hardware, thus enabling IT to grow the processing power without changing anything on the end user side. The workstation computing power is hosted in the data center on blade workstations, which provide greater scalability, increased data security, improved business continuance across multiple sites, and a reduction in OPEX by removing the need to manage individual workstations on the trading floor. One blade workstation can be dedicated to a trader or shared among multiple traders depending on the requirements for compute power.


The "thin client" solution is optimized to work in a campus LAN environment, but can also extend the benefits to traders in remote locations. Latency is always a concern when a WAN interconnects the blade workstation and thin client devices. The network connection needs to be sized accordingly so traffic is not dropped at saturation points in the WAN topology, and WAN Quality of Service (QoS) should prioritize the latency-sensitive traffic. Some guidelines should be followed to allow for an optimized user experience. A typical highly interactive desktop experience requires a client-to-blade round-trip latency of less than 20 ms for a 2 KB packet size. There may be a slight lag in display if network latency is between 20 ms and 40 ms. A typical trader desk with four displays requires 2-3 Mbps of bandwidth for seamless communication with blade workstation(s) in the data center. Streaming video (800x600 at 24 fps, full color) requires 9 Mbps of bandwidth.
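The guidelines above translate into a simple link-sizing check. The figures (2-3 Mbps per trader desk, 9 Mbps per video stream) come from the text; the helper function, its name, and the 80% headroom factor are illustrative assumptions.

```python
def wan_link_fits(traders, video_streams, link_mbps,
                  per_desk_mbps=3.0, per_video_mbps=9.0, headroom=0.8):
    """Check that a WAN link carries the thin-client load with headroom to spare.

    Worst-case desk demand (3 Mbps) is assumed; headroom=0.8 keeps 20% of the
    link free so interactive traffic is never squeezed at saturation.
    """
    demand = traders * per_desk_mbps + video_streams * per_video_mbps
    return demand <= link_mbps * headroom, demand

ok, demand = wan_link_fits(traders=10, video_streams=2, link_mbps=100)
# 10 desks * 3 + 2 streams * 9 = 48 Mbps demand against 80 Mbps of usable capacity
```

The latency budget (under 20 ms round trip) is independent of this bandwidth check and is governed by distance and queuing, so both constraints must be verified separately.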


Figure 22 Thin Client Architecture.


Management of a large thin client environment is simplified since a centralized IT staff manages all of the blade workstations dispersed across multiple data centers. A trader is redirected to the most available environment in the enterprise in the event of a particular site failure. High availability is a key concern in critical financial environments and the Blade Workstation design provides rapid provisioning of another blade workstation in the data center. This resiliency provides greater uptime, increases in productivity, and OpEx reduction.


Low Latency Trading Architecture at LMAX Exchange.






Sam Adams presents an overview of the architecture LMAX Exchange uses to deliver over $2 trillion a year through their platform, and shares their experience of how taking a scientific approach to testing and tuning software has helped them to build a high-availability stateful system.


Sam Adams is currently the Head of Software Engineering at LMAX Exchange. He has had an eclectic career to date: variously modeling the metabolism of drugs and food additives, creating tools to manage and mine scientific data, and now building a high performance exchange at one of the UK's fastest growing Tech Companies.


Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.
