node-markdown-spellcheck

Spellchecking broken after git merge

Open prd001 opened this issue 8 years ago • 0 comments

After merging a file in git, spellchecking started to flag things that have not changed at all. I took the two files, one from before the merge and one from after, put them into the same folder, and these are the results:

# mdspell Running_OpenMPI3.md -r
    Running_OpenMPI3.md
        1 | Running OpenMPI
        4 | OpenMPI program execution
      204 | Some options have changed in OpenMPI version 1.8.
      204 | ve changed in OpenMPI version 1.8.
      206 |  |version 1.6.5 |version 1.8.1 |
      206 |  |version 1.6.5 |version 1.8.1 |
      211 |  |-bysocket |--map-by socket |
      212 |  |-bycore |--map-by core |
      213 |  |-pernode |--map-by ppr:1:node |

>> 9 spelling errors found in 1 file
# mdspell Running_OpenMPI2.md -r
    Running_OpenMPI2.md
        1 | # Running OpenMPI
        3 | ## OpenMPI Program Execution
        5 | The OpenMPI programs may be executed only
        5 |  queue. On the cluster, the **OpenMPI 1.8.6** is OpenMPI based MPI
        5 | On the cluster, the **OpenMPI 1.8.6** is OpenMPI based MPI implem
        5 | ter, the **OpenMPI 1.8.6** is OpenMPI based MPI implementation.
        5 | nMPI 1.8.6** is OpenMPI based MPI implementation.
        9 | Use the mpiexec to run the OpenMPI code.
        9 | Use the mpiexec to run the OpenMPI code.
       29 | run hybrid code with just one MPI and 24 OpenMP tasks per node)
       29 | id code with just one MPI and 24 OpenMP tasks per node). In no
       29 | code with just one MPI and 24 OpenMP tasks per node). In normal MP
       29 | MP tasks per node). In normal MPI programs **omit the -pernode
       29 | node directive** to run up to 24 MPI tasks per each node.
       29 | e directive** to run up to 24 MPI tasks per each node.
       31 | In this example, we allocate 4 nodes via the express queue i
       31 |  interactively. We set up the openmpi environment and interactively
       31 | ent and interactively run the helloworld_mpi.x program.
       31 | teractively run the helloworld_mpi.x program.
       32 | Note that the executable helloworld_mpi.x must be available withi
       32 | that the executable helloworld_mpi.x must be available within the
       34 | able, if running on the local ramdisk /tmp filesystem
       47 | ple, we assume the executable helloworld_mpi.x is present on compute n
       47 | sume the executable helloworld_mpi.x is present on compute node r1
       47 | .x is present on compute node r1i0n17 on ramdisk. We call the mpiex
       47 | nt on compute node r1i0n17 on ramdisk. We call the mpiexec whith th
       47 | i0n17 on ramdisk. We call the mpiexec whith the **--preload-binary*
       47 |  ramdisk. We call the mpiexec whith the **--preload-binary** argu
       47 | -binary** argument (valid for openmpi). The mpiexec will copy the e
       47 | ment (valid for openmpi). The mpiexec will copy the executable from
       47 | will copy the executable from r1i0n17 to the  /tmp/pbs.15210.isrv5
       47 | ble from r1i0n17 to the  /tmp/pbs.15210.isrv5 directory on r1i0n5, r1i0n6 a
       47 | /pbs.15210.isrv5 directory on r1i0n5, r1i0n6 and r1i0n7 and execut
       47 | 10.isrv5 directory on r1i0n5, r1i0n6 and r1i0n7 and execute the pr
       47 | rectory on r1i0n5, r1i0n6 and r1i0n7 and execute the program.
       49 | MPI process mapping may be contro
       51 | election of number of running MPI processes per node as well as
       51 | per node as well as number of OpenMP threads per MPI process.
       51 |  number of OpenMP threads per MPI process.
       53 | ### One MPI Process Per Node
       55 | ollow this example to run one MPI process per node, 24 threads
       55 | run one MPI process per node, 24 threads per process.
       65 | ate recommended way to run an MPI application, using 1 MPI proc
       65 | run an MPI application, using 1 MPI processes per node and 24
       65 | n an MPI application, using 1 MPI processes per node and 24 thr
       65 |  1 MPI processes per node and 24 threads per socket, on 4 node
       65 | and 24 threads per socket, on 4 nodes.
       67 | ### Two MPI Processes Per Node
       69 | ollow this example to run two MPI processes per node, 8 threads
       69 | n two MPI processes per node, 8 threads per process. Note the
       69 |  process. Note the options to mpiexec.
       79 | ate recommended way to run an MPI application, using 2 MPI proc
       79 | run an MPI application, using 2 MPI processes per node and 12
       79 | n an MPI application, using 2 MPI processes per node and 12 thr
       79 |  2 MPI processes per node and 12 threads per socket, each proc
       79 | cessor socket of the node, on 4 nodes
       81 | ### 24 MPI Processes Per Node
       81 | ### 24 MPI Processes Per Node
       83 | Follow this example to run 24 MPI processes per node, 1 thr
       83 | Follow this example to run 24 MPI processes per node, 1 thread
       83 | un 24 MPI processes per node, 1 thread per process. Note the
       83 |  process. Note the options to mpiexec.
       93 | ate recommended way to run an MPI application, using 24 MPI pro
       93 | run an MPI application, using 24 MPI processes per node, singl
       93 |  an MPI application, using 24 MPI processes per node, single th
       93 | o separate processor core, on 4 nodes.
       95 | ### OpenMP Thread Affinity
       98 |     Important!  Bind every OpenMP thread to a core!
      100 |  two examples with one or two MPI processes per node, the opera
      100 | ng system might still migrate OpenMP threads between cores. You mi
      100 |  environment variable for GCC OpenMP:
      106 | or this one for Intel OpenMP:
      112 | As of OpenMP 4.0 (supported by GCC 4.9 and
      112 | As of OpenMP 4.0 (supported by GCC 4.9 and lat
      112 |  OpenMP 4.0 (supported by GCC 4.9 and later and Intel 14.0 and
      112 | y GCC 4.9 and later and Intel 14.0 and later) the following vari
      119 | ## OpenMPI Process Mapping and Binding
      121 | The mpiexec allows for precise selection
      121 |  precise selection of how the MPI processes will be mapped to t
      123 | MPI process mapping may be specif
      123 | mapping may be specified by a hostfile or rankfile input to the mpie
      123 | be specified by a hostfile or rankfile input to the mpiexec program.
      123 | file or rankfile input to the mpiexec program. Altough all implemen
      123 | input to the mpiexec program. Altough all implementations of MPI pr
      123 | ltough all implementations of MPI provide means for process map
      123 | ng examples are valid for the openmpi only.
      125 | ### Hostfile
      127 | Example hostfile
      136 | Use the hostfile to control process placement
      146 | er in which nodes show in the hostfile
      148 | ### Rankfile
      150 | Exact control of MPI process placement and resourc
      150 | g is provided by specifying a rankfile
      154 | Example rankfile
      164 | This rankfile assumes 5 ranks will be runni
      164 | This rankfile assumes 5 ranks will be running on 4 no
      164 | es 5 ranks will be running on 4 nodes and provides exact mapp
      167 | rank 0 will be bounded to r1i0n7, so
      167 | rank 0 will be bounded to r1i0n7, socket1 core0 and core1
      167 |  0 will be bounded to r1i0n7, socket1 core0 and core1
      167 | be bounded to r1i0n7, socket1 core0 and core1
      167 |  to r1i0n7, socket1 core0 and core1
      168 | rank 1 will be bounded to r1i0n6, so
      168 | rank 1 will be bounded to r1i0n6, socket0, all cores
      168 |  1 will be bounded to r1i0n6, socket0, all cores
      169 | rank 2 will be bounded to r1i0n5, so
      169 | rank 2 will be bounded to r1i0n5, socket1, core1 and core2
      169 |  2 will be bounded to r1i0n5, socket1, core1 and core2
      169 | e bounded to r1i0n5, socket1, core1 and core2
      169 | to r1i0n5, socket1, core1 and core2
      170 | rank 3 will be bounded to r1i0n17, s
      170 | rank 3 will be bounded to r1i0n17, socket0 core1, socket1 core0
      170 | 3 will be bounded to r1i0n17, socket0 core1, socket1 core0, core1,
      170 | e bounded to r1i0n17, socket0 core1, socket1 core0, core1, core2
      170 | ed to r1i0n17, socket0 core1, socket1 core0, core1, core2
      170 | i0n17, socket0 core1, socket1 core0, core1, core2
      170 | socket0 core1, socket1 core0, core1, core2
      170 |  core1, socket1 core0, core1, core2
      171 | rank 4 will be bounded to r1i0n6, al
      171 | rank 4 will be bounded to r1i0n6, all cores on both sockets
      187 | In this example we run 5 MPI processes (5 ranks) on fo
      187 | In this example we run 5 MPI processes (5 ranks) on four n
      187 | ample we run 5 MPI processes (5 ranks) on four nodes. The ran
      187 |  (5 ranks) on four nodes. The rankfile defines how the processes wil
      187 | and bindings. Note that ranks 1 and 4 run on the same node an
      187 | ndings. Note that ranks 1 and 4 run on the same node and thei
      201 | ## Changes in OpenMPI 1.8
      201 | ## Changes in OpenMPI 1.8
      203 | Some options have changed in OpenMPI version 1.8.
      203 | ve changed in OpenMPI version 1.8.
      205 | | version 1.6.5    | version 1.8.1       |
      205 | | version 1.6.5    | version 1.8.1       |
      210 | | -bysocket        | --map-by socket
      211 | | -bycore          | --map-by core
      212 | ernode         | --map-by ppr:1:node |

>> 135 spelling errors found in 1 file
# mdspell -V
0.11.0

I am also attaching the .spelling file that is in the same folder. How can this be resolved?

broken files.zip
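For context, the .spelling file used by markdown-spellcheck is a plain word list: lines beginning with # are comments, the global dictionary comes first, and per-file overrides can follow. The attached file is not reproduced here; the entries below are only an illustrative sketch of what exceptions for a document like this might look like:

    # markdown-spellcheck spelling configuration file
    # Format - lines beginning # are comments
    # global dictionary is at the start, file overrides afterwards
    # one word per line, to define a file override use ' - filename'
    # where filename is relative to this configuration file
    OpenMPI
    OpenMP
    mpiexec
    hostfile
    rankfile
    ramdisk
    helloworld_mpi.x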

prd001 · Feb 10 '17 10:02