Lenovo Support Partner
Posts: 451
Registered: ‎07-16-2013
Location: SI
Views: 311
Message 1 of 5

NE10032 / NE2572 / NE1072T questions

Hi,

 

I have a few questions regarding following setup (see picture below):

("ISL" and "uplinks" would be vLAG)

 

[Attached diagram: 100G.jpg]

Questions are:

1) Can multiple links across each pair of switches be included in a single vLAG? (For example, all eight 100G "uplinks" from both NE2572 switches to both NE10032 switches.) - I suppose yes?

2) The NE10032 must be set to the 100G/40G/10G port configuration, while we want to use the 10G/25G/100G configuration on the NE2572. Is that an issue, given that they would be connected through vLAGs? I suppose not?

3) Is it better to have the pairs of NE1072T switches ISL-connected or not (because of (not) using STP)?

4) Any other comment regarding design in the picture?

5) A separate question - is there a plan to support the NE1072T as a leaf switch in a Spine-Leaf design, or is it perhaps already supported?

 

TY in advance.

 

***

Disclaimer: I am not a Lenovo employee. While I do work for a Lenovo Partner, all my contributions are personal and unofficial, and do not represent Lenovo or my employer.


Lenovo Employee mslavin
Lenovo Employee
Posts: 135
Registered: ‎03-31-2015
Location: US
Views: 271
Message 2 of 5

Re: NE10032 / NE2572 / NE1072T questions

Hello, I've included answers and comments below:

 

1) Can multiple links across each pair of switches be included in a single vLAG? (For example, all eight 100G "uplinks" from both NE2572 switches to both NE10032 switches.) - I suppose yes?

>>> Matt>>> Yes. That is the whole purpose of vLAG: to let a pair of switches act like a single switch for an aggregation. To do this you create a 4-port aggregation on each of the two switches, then use the vLAG configuration to tie them together into a single 8-port aggregation (much like you would do with Cisco vPC, but using different commands to accomplish the same task).
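A rough CNOS-style sketch of what Matt describes (command names recalled from the CNOS Application Guide; port numbers, channel numbers, and the tier ID are hypothetical - verify the exact syntax against the guide for your code level):

```
! On each NE2572 in the vLAG pair: bundle its four 100G uplinks
interface ethernet 1/49-52
  channel-group 10 mode active
exit

! Tie the two local port-channels together into one 8-port vLAG aggregation
vlag tier-id 20
vlag isl port-channel 100
vlag instance 1 port-channel 10
vlag instance 1 enable
vlag enable
```

The same port-channel and vLAG-instance pattern is repeated on the peer switch; the ISL port-channel carries the vLAG synchronization between the two.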

 

2) The NE10032 must be set to the 100G/40G/10G port configuration, while we want to use the 10G/25G/100G configuration on the NE2572. Is that an issue, given that they would be connected through vLAGs? I suppose not?

>>> Matt>>> Not sure I understand the question. If you are asking whether the NE10032 can have ports configured for 100G/40G/10G while the NE2572 is configured for 10G/25G/100G, that is not an issue, as long as any two ports connecting together are the same speed, and all links in a vLAG aggregation are the same speed.

 

3) Is it better to have the pairs of NE1072T switches ISL-connected or not (because of (not) using STP)?

>>> Matt>>> Yes. As drawn, there would be STP-blocked links. If you add an ISL between the NE1072Ts and make all uplinks a single vLAG aggregation, there would be no blocked links.

 

4) Any other comment regarding design in the picture?

>>> Matt>>> In the diagram, the pair of NE1032s below the NE2572s does not show an ISL. For optimal operation, these should also have an ISL and be configured for vLAG, with a single aggregation headed toward the NE2572s. I have redrawn the diagram (attached) to show all switches in vLAG pairs, which would be your best design. One important rule here: you need to use a different vLAG tier ID for each pair (the tier ID is used to generate a unique MAC, ensuring each pair produces a unique system ID).
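The tier-ID rule Matt mentions might look like this across the four pairs (the ID values here are hypothetical; the only requirement is that each vLAG pair uses a different value than every other pair it connects to):

```
! NE10032 pair
vlag tier-id 10
! NE2572 pair
vlag tier-id 20
! NE1072T pair
vlag tier-id 30
! NE1032 pair
vlag tier-id 40
```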

 

5) A separate question - is there a plan to support the NE1072T as a leaf switch in a Spine-Leaf design, or is it perhaps already supported?

>>> Matt>>> Can you be more specific? The definition of leaf/spine, and which features are required, varies. I can do a form of leaf/spine that also utilizes vLAG pairs (L2 down from the leaf and L3 upstream toward the spine); a purer form of leaf/spine that is L3 everywhere, using simple static routing; or a full multiprotocol leaf/spine design that is L3 everywhere. All of these could include the NE1072T. But if you are looking for a leaf/spine design that also runs an overlay network, with VTEPs on the leaf switches, the NE1072T does not support that. So it depends on exactly what you are looking for in your leaf/spine design.

 

Hope this helps.

 

Thanks, Matt

Lenovo Support Partner
Posts: 451
Registered: ‎07-16-2013
Location: SI
Views: 217
Message 3 of 5

Re: NE10032 / NE2572 / NE1072T questions

Dear Matt,

I am sorry for my late reply. TY for your extensive information and your care. Much appreciated.

 

Regarding:
- Question 1) TY all clear now.
- Question 2) TY all clear now. Yes you understood it correctly.
- Question 3) TY all clear now.
- Question 4) TY all clear now.
- Question 5) TY all clear now. I was asking this question based on https://lenovopress.com/lp0573.pdf, where NE1072T was not mentioned.

 

We are actually considering a Spine-Leaf approach for this project now. The use case would be hybrid - both datacenter network infrastructure and a Core/Aggregation tier for (client) LAN traffic. Please see the attached concept.

 

Regarding Spine-Leaf in general, and for this case, I have the following questions:

 

A) I suppose it is not possible to "mix" different uplink speeds and capacities between spine and leaf switches? For example:
the NE1072T has 40Gb uplinks, while the NE2572 has 100Gb uplinks. In this case, must both the NE1072T and the NE2572 be connected to the NE10032 switches through 40Gb uplinks (if all are part of the same setup)? Or can the NE2572 be connected at 100Gb? Must all uplinks be symmetric (from leaves to spines) or not? (For example - if 2 uplinks run from each NE1072T to each NE10032, must exactly 2 uplinks also run from each G8296 to each NE10032, and not more?)

 

B) Is there a rule for how many uplinks can run between leaves and spines? I.e., can 3 uplinks be used from each leaf switch to each spine switch (since there are only 2 spines, and most switches have 6 uplink ports)? Or is it better to use 2 uplinks (since 3 is an odd number and 2 is even)?

 

A final question regarding the NE2572:
the majority of vendors also offer 1G speed capability on their 25G switches. The NE2572 does not. Will this be addressed, or will it not change in the future?

 

TY in advance.

 

PS: I am quite new to networking, so please forgive me some newbie questions.



Lenovo Employee mslavin
Lenovo Employee
Posts: 135
Registered: ‎03-31-2015
Location: US
Views: 213
Message 4 of 5

Re: NE10032 / NE2572 / NE1072T questions

Hello,

 

I will try to provide answers, but keep in mind that just because you can do something does not mean you should. There are many, many reasons to choose one design over another, and they vary with every customer's requirements. But there is an underlying mantra in all technology (in my opinion at least), and that is the "Keep It Simple" (KIS) approach, whenever possible. You can design as simply or as complexly as you want, and in the end you can usually make either work, but the complex design is more likely to run into issues and, by its very nature, will be more difficult to support than a simpler design that accomplishes the same task. Thus I am a lifelong fan of the KIS approach to networking. With the limited information I have about your environment and requirements, the design shared in my previous response (L2 access/distribution using vLAG at various levels) would be far simpler to implement and support than the true L3 leaf/spine design you are proposing.

 

With that said, my comments and answers (prefaced by >>> Matt>>>) in line to your questions (prefaced by >>> TM>>>):

 

>>> TM>>> PS: I am quite new to networking, so please forgive me some newbie questions.
>>> Matt>>> Based on this comment, and the lack of scale requirements I've seen so far, I would not be looking at an L3 leaf/spine design. An L3 leaf/spine design is far more complex than the typical L2 access/distribution design previously discussed, and it will not offer any L2 connectivity between leaf switches (more than likely needed, though that is just a guess on my part) without introducing some kind of overlay network, which adds even more complexity. If you had access to a dedicated networking resource familiar with network design, configuration, and support, then L3 leaf/spine might be a more viable option, but only after that resource fully understands all of your requirements and plans for the future.

 

>>> TM>>> A final question regarding the NE2572: the majority of vendors also offer 1G speed capability on their 25G switches. The NE2572 does not. Will this be addressed, or will it not change in the future?
>>> Matt>>> 1G support will be available on the NE2572 with the release of the 10.9.1 code, currently slated for the end of this month (October 31st).

 

>>> TM>>> A) I suppose it is not possible to "mix" different uplink speeds and capacities between spine and leaf switches? For example:
the NE1072T has 40Gb uplinks, while the NE2572 has 100Gb uplinks. In this case, must both the NE1072T and the NE2572 be connected to the NE10032 switches through 40Gb uplinks (if all are part of the same setup)?
>>> Matt>>> You can do whatever you want; nothing will stop you from using different speeds in an L3 leaf/spine design. But most such deployments use common speeds in the leaf/spine fabric, to ensure a balanced and deterministic environment. Nothing would prevent you from doing leaf/spine with different speed links; it is just typically not a good idea.

 

>>> TM>>> Or can the NE2572 be connected at 100Gb? Must all uplinks be symmetric (from leaves to spines) or not? (For example - if 2 uplinks run from each NE1072T to each NE10032, must exactly 2 uplinks also run from each G8296 to each NE10032, and not more?)
>>> Matt>>> Like the above, you can use as many or as few links as desired (or as needed for bandwidth) at any point, but it has consequences and starts to negate the value of a typical leaf/spine design, where everything is very deterministic as to the paths used, the number of hops, and the latency between nodes. Using different speed links and different numbers of links could lead to unexpected routing and an even more difficult environment to understand and support.

 

>>> TM>>> B) Is there a rule for how many uplinks can run between leaves and spines? I.e., can 3 uplinks be used from each leaf switch to each spine switch (since there are only 2 spines, and most switches have 6 uplink ports)? Or is it better to use 2 uplinks (since 3 is an odd number and 2 is even)?
>>> Matt>>> This will depend on your requirements for bandwidth and on the available ports in the various switches, which in turn depend on other factors, such as how much you plan to scale the environment before decommissioning it. It is, however, most common and recommended to have the same number of links from each leaf switch to each spine switch.
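To make the bandwidth side of this concrete, here is a small illustrative calculation (the port counts are hypothetical examples, not figures from this thread): a leaf's oversubscription ratio is its total downlink bandwidth divided by its total uplink bandwidth, so adding a third uplink to each spine simply lowers that ratio.

```python
def oversubscription(downlinks: int, down_gbps: float,
                     uplinks: int, up_gbps: float) -> float:
    """Ratio of total downlink bandwidth to total uplink bandwidth."""
    return (downlinks * down_gbps) / (uplinks * up_gbps)

# Hypothetical NE1072T-style leaf: 48 x 10G server-facing ports.
# 2 x 40G uplinks to each of 2 spines (4 uplinks total):
print(oversubscription(48, 10, 4, 40))  # 3.0 -> 3:1 oversubscription

# 3 x 40G uplinks to each of 2 spines (6 uplinks total):
print(oversubscription(48, 10, 6, 40))  # 2.0 -> 2:1 oversubscription
```

Whether 2:1 is worth the extra ports depends on the traffic profile; the symmetry rule (same link count to each spine) matters more than the absolute number.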

 

>>> Matt>>> As noted above, I do not think I would be looking at an L3 leaf/spine design. The L2 access/distribution design as shown in the attachment in my previous response has the advantage of being simple (relative to a full L3 leaf/spine design) and providing L2 connectivity end to end if needed (but L3 could be added at points if required), without the need for implementing some overlay technology such as VXLAN.

 

Hope this helps.

 

Thanks, Matt

Lenovo Support Partner
Posts: 451
Registered: ‎07-16-2013
Location: SI
Views: 211
Message 5 of 5

Re: NE10032 / NE2572 / NE1072T questions

TY for prompt reply. Will discuss it with our techies ...

